Publication


Featured research published by Lesley Jolly.


European Journal of Engineering Education | 2012

Project-based learning as a contributing factor to graduates’ work readiness

Margaret Jollands; Lesley Jolly; Tom Molyneaux

This paper explores what work readiness means for two cohorts of graduate engineers, one from a traditional curriculum and the second from a largely project-based curriculum. Professional bodies and employers have defined a set of attributes for engineering graduates so that graduates will be ‘work ready’. Project-based learning (PBL) is claimed to be a suitable approach for developing such skills. The graduates, along with their managers, were interviewed some months after starting work. All the graduates recognised the benefits of taking PBL subjects as well as vacation work, with success in communication attributed more to PBL. Both cohorts had similar learning outcomes: high skill levels in project management, problem solving, communication, research and sustainability. A skills gap in ethics was identified by both cohorts of graduates and their managers. Further work is planned to link skill development with undergraduate learning experience.


Frontiers in Education Conference | 2011

Effective evaluation strategies to meet global accreditation requirements

Lyn Brodie; Frank Bullen; Lesley Jolly

With the ongoing internationalisation of the engineering profession there is an ever-increasing need for universities to provide robust evaluation of the quality of their undergraduate degree programs and to benchmark that quality internationally. It is important that the claims made about course evaluation and renewal during the evaluation-accreditation process can be substantiated, and that the tenuous connection between course evaluation and international acceptance as a professional engineer be strengthened. There are a variety of methods used to evaluate courses and programs, including student questionnaires, final grades, progression-retention data, and graduate attribute and competency mapping. The authors compared typical examples of such approaches to study the robustness of the link between the data collected and the evaluative judgments. It was found that a great deal of inference is involved in the process and that the causative link between curriculum design and pedagogy, and skills and attributes, is often tenuous. Some of these approaches should be taken not as final evaluation outcomes, but rather as inputs to a larger overarching evaluation strategy. It was concluded that a “program logic” approach, such as that used by the University of Wisconsin-Extension Program Development and Evaluation Model, offers a superior way of capturing and assessing the causal connections between local evaluation and international accreditation.


Australasian Journal of Engineering Education | 2010

Influences on Student Learning in Engineering: Some Results from Case Study Fieldwork

Wageeh W. Boles; Lesley Jolly; Roger Hadgraft; Prue Howard; Hilary Beck

This paper closely examines factors affecting students’ progression in their engineering programs through fieldwork conducted at three Australian universities. To extract clues on how specific teaching methods can be used to maximise learning, the investigation considered factors such as how students take in, process and present information. A number of focus groups were conducted with students, and the data gathered were combined with survey results on students’ and academics’ learning styles. The paper reports on the process followed and provides some analysis of the gathered data, as part of an Australian Learning and Teaching Council Associate Fellowship program.


The Aboriginal Child at School | 1995

Waving a Tattered Banner? Aboriginal Language Revitalisation

Lesley Jolly

This paper examines the philosophy and practice of programs that aim to maintain, renew, or revive Aboriginal languages in Australia. I focus here on languages, mainly those of urban and rural rather than remote areas, for which there are few if any fluent speakers left. I will refer to them as dead or dying languages, although I am aware that in the Aboriginal tradition people consider themselves to own languages that neither they themselves nor their close kin speak, and that this ownership is very much a part of a living culture. I begin by reviewing some basic issues that arise in planning language programs for such languages. The final section considers some of the factors affecting the success of such programs.


European Journal of Engineering Education | 2017

Australian engineering educators’ attitudes towards Aboriginal cultures and perspectives

Thomas Goldfinch; Juliana Kaya Prpic; Lesley Jolly; Elyssebeth Leigh; Jade Kennedy

In Australia, representation of Aboriginal populations within the engineering profession is very low despite participation targets set by government departments, professional bodies and universities. Progressing the Aboriginal inclusion agenda within Australian engineering education requires a clearer understanding of engineering educators’ preparedness for increased numbers of students from this non-traditional cohort. This research stems from a recently completed project that explored Aboriginal perspectives in engineering education and proposed a model for embedding those perspectives in curricula. Nine engineering academics were interviewed to explore attitudes towards Aboriginal perspectives in engineering and the viability of the proposed model. Results of the interviews indicate that efforts to embed Aboriginal perspectives are starting from a small base of knowledge and experience. Individuals’ motivations and values indicate that there is significant support for improving this, but efforts can be hampered by conceptions of Aboriginal perspectives that do not consider how Aboriginal knowledges may change engineering itself.


Australasian Journal of Engineering Education | 2018

Diversifying methods in educational research: what we learned at Winter School

Esther Matemba; Lyndal Parker; Lesley Jolly

The Australasian Association for Engineering Education (AAEE) has been sponsoring a Winter School in Engineering Education Research Methods since 2011. This paper describes how, in 2017, attendees at the School applied what they had learned about a little-used data-gathering technique: observation. Starting with a Program Logic analysis, which identifies what an intervention ought to be doing and hence what kind of evidence needs to be collected to describe its effect, some participants who had attended prior Winter Schools were given the chance to collect the evidence. They found observation much harder to do well than one would think. This paper describes their experience and argues for the use of observational techniques to triangulate our data-gathering methods and improve the quality of our educational research. However, we also learned that a great benefit of observation comes from sustained reflection on the process and the data collected. Without such reflection, we argue, observation is likely to produce rather thin results. Abbreviations: AAEE, Australasian Association for Engineering Education; JEE, Australasian Journal of Engineering Education; WS2017, 2017 Winter School.


Archive | 2014

Simulating work: can simulators help develop a workforce?

Lydia Kavanagh; Lesley Jolly; Liza O'Moore; Gregory Tibbits

The aviation model of simulator training emphasises realistic physical conditions and practice of emergency responses. Its apparent success has led to the adoption of simulators in other industries such as rail. Relatively light levels of use of the simulators in that industry indicate that simulators may not fit well in all industries, no matter how similar their operations may seem. This leads us to ask what needs to be simulated in workplace development settings and whether better-targeted simulation might expand the ways in which simulators can be used. Much of the existing technical discussion of simulators comes from a human factors perspective which focuses on micro-processes in performance. We argue for a more socio-cultural and socio-technical position: that simulators can develop workforce competency only when jobs are understood in their socio-cultural settings and the role of technology is understood as relative to and determined by that setting. We also present ways in which industry can approach the identification of targets for simulator use and implementation strategies. These suggestions have the potential not only to save money but also to contribute to a more professional and engaged workforce.


Archive | 2014

Educational Technologies and the Training Curriculum

Lesley Jolly; Gregory Tibbits; Lydia Kavanagh; Liza O’Moore

Technologies such as online tools, simulations and remote labs are often used in learning and training environments, both academic and vocational, to deliver content in an accessible manner. They promise efficiencies of scale, flexibility of delivery and face validity for a generation brought up on electronic devices. However, learning outcomes are not the same in all circumstances and contextual and cultural factors can lead to the failure of technology that has been successful elsewhere. This chapter draws on the team’s studies of the use of simulators and simulations within the vocational environment of the Australian rail industry to consider how the broader context of the training/educational curriculum affects what works for whom under what circumstances.


Archive | 2014

The Developmental Role of Competence Assurance

Liza O’Moore; Lesley Jolly; Lydia Kavanagh

Competence assurance (CA) is a process of ensuring that the workforce is able to carry out its work in a safe and competent manner. It can entail disruptive and expensive regular assessments of workers’ performance, require employers to ‘backfill’ positions during the process, and provide little obvious direct benefit for the business. It may not provide accurate assessment if workers are withdrawn from duties for assessment, as their performance obviously cannot be the same as under working conditions. If workers are assessed in medias res, there are issues around the potential observer effect on assessment outcomes. Employers and workplace assessors need ways of assessing performance that accurately target what is of interest with minimum disruption and risk. However, we will argue that the CA process represents an opportunity lost in terms of workforce development if what is of interest is narrowly defined as present job skills, with little attention paid to workers’ competencies as a whole. The key theoretical issues to be addressed here relate to authenticity in assessment in workplace training and include consideration of how competence/competency is defined. We consider how to achieve authentic assessment in safety-critical workplace settings in a way that will allow for targeted workforce development in the future. A change away from current practices towards portfolio-based and 360-degree assessment has the potential to describe more accurately where skills and deficits lie, to help companies identify personnel with needed competencies and provide relevant support for their development within a chosen career path, and to help workers identify their skills and goals and how they may be pursued within the industry or company.


Learning Communities: International Journal of Learning in Social Contexts | 2014

Telling Context from Mechanism in Realist Evaluation: The role for theory

Hannah Jolly; Lesley Jolly

Realist evaluation is based on the premise that aspects of context trigger particular mechanisms in response to an intervention, which result in observable outcomes. This is often expressed in the formula C + M = O. Contexts are defined as the conditions that an intervention operates in (often but not exclusively sociocultural), while mechanisms are understood to be the action that people take in response to the intervention. There is much debate, however, about these definitions, and because the distinctions are not clear-cut it can be difficult to decide which is which, particularly when the intervention concerns some program of curricular change. In this paper we discuss how we resolved this dilemma in an evaluation of a curriculum change in 13 universities in Australia and New Zealand. In that case we found a cascade of contexts and mechanisms, whereby what was a mechanism from one point of view (such as the decisions involved in course design) became a context triggering later mechanisms (such as teacher and student behaviours). The scholarly literature defining curriculum helped us to organise our thinking and subsequent analysis in a rational way, but in many evaluations there may not be a handy body of work that discusses how to understand the topic of the intervention in this way, nor do many consultant evaluators have the luxury of long hours in the library. We consider some ways in which evaluators might decide on defining contexts and mechanisms in principled ways, and some of the consequences of those decisions.

Introduction

The contribution of Pawson and Tilley’s (1997) realist approach to program evaluation has constituted a significant shift from available methods. It is most simply understood as a method for evaluating “what works for whom in what circumstances” (Pawson & Tilley, 1997). Rather than focus on global judgements about the worth of a program, it seeks to identify the varieties of success and failure that any program experiences and the factors that contribute to all of the eventual outcomes. The basic premise is that there will be a range of conditions, often sociocultural, that affect the outcomes of any program. These are referred to as Contexts (C). In addition, the ways in which people respond – their reasoning about what they should do and the resources they can bring to bear (Pawson & Tilley, 1997, p. 67) – will also vary. In the realist approach this is referred to as the Mechanism (M). Hypotheses about how the program results in observed outcomes (O) are often expressed in the formula C + M = O (CMO).

The attraction of this approach lies in its recognition that real-life programs are rarely entirely successful or entirely unsuccessful, but have patches of success and failure. Also, it is common to find that a program judged to have worked well in one place fails in another or in subsequent years. Realist Evaluation (RE) focusses not only on the underlying factors behind outcomes but also on the various ways in which those factors can combine and recombine to cause outcomes. Since its publication, the approach has been widely taken up and applied with varying methodological success (Pawson & Manzano-Santaella, 2012), suggesting that the application of the method is not so simple. Pawson and Manzano-Santaella (2012, p. 176) have now published a discussion of some of the challenges of the “practice on the ground”, including the oft-expressed problem of “I am finding it hard to distinguish Cs from Ms and Os, what is the secret?” (Pawson & Manzano-Santaella, 2012, p. 188). Whilst their paper discusses this issue in some detail, we will also address the subtleties of this challenge, and attempt to explore their recommendation that “which property falls under which category is determined by its explanatory role” (Pawson & Manzano-Santaella, 2012, p. 187).

The challenges of applying the realist evaluation approach

Whilst much of the discussion of the difficulties of applying the realist approach is given over to understanding the differences in function between Contexts and Mechanisms, this may be premature if a suitable understanding of the function of a CMO configuration as a whole is not applied to the process of evaluation. In their 2012 “workshop” on the method, Pawson and Manzano-Santaella (2012, p. 188) emphasise that “the function of CMO configurations...is that they are rather narrow and limited hypotheses, which attempt to tease out specific causal pathways, as prespecified mechanisms, acting in pre-specified contexts spill out into pre-specified and testable outcome patterns.” That is to say, these configurations are sensitive to the actual moment in the intervention process being considered. They need to be used at appropriate times and in appropriate ways during the data analysis if they are to help us to make meaningful evaluations.

In our case, we had an idea of what the intervention was meant to achieve and how it was meant to achieve it, and we began analysis by trying to define contexts and mechanisms directly from the data. When we took this approach we found that it led us in circles. This is because the function of a variable in a moment of analysis (that is, whether it acts as a Context or as a Mechanism) is very much dependent on the focus of explanation at a given point in the analysis. Something which is a mechanism at one stage of an intervention, such as the reasoning leading to particular decisions about how to design and implement a program, may then produce a fresh context for a later stage, such as the way subjects strategise in response to the program design. This situation was complicated in the example evaluation by the fact that the program of intervention was taking place in multiple sites, with differing purposes and methods of implementation in each site. We knew that the focus of explanation needed to vary from site to site, but had not yet pinned down how. Add to this that the program in question concerned a curricular innovation (the notion of curriculum being notoriously slippery), and we quickly discovered that analysis of the data we had collected was creating more questions than answers.

As Pawson and Manzano-Santaella (2012, p. 178) reiterate, “realist evaluation is [or should be] avowedly theory-driven; it searches for and refines explanations of program effectiveness.” While it can be daunting to be told that more theory is needed, in our case the theory that helped us to define the specific causal pathways to be investigated turned out to be a quite practical one about the nature of curriculum. While this is a highly debated topic, once we had settled on an understanding of what “curriculum” encompasses and how its various elements interact, the evaluation task became much easier.

The example evaluation

The evaluation in question was of a program of curricular innovation that had taken place at a variety of universities across Australia and New Zealand. The program involved the introduction of the Engineers Without Borders (EWB) Challenge into the first-year engineering curriculum. The EWB Challenge was conceived as a means of exposing students to the principles of engineering design and problem solving by providing a design challenge based on the requirements of a real, third-world community that has worked with EWB on sustainable development projects. This program of innovation constituted a “widespread curriculum renewal in engineering education”, because:

The first year in engineering had traditionally focussed on basic science and maths, and the introduction of the Challenge and its associated team-based project work allowed for development of the so-called “soft skills” amongst the graduate attributes: communication and teamwork and an understanding of the need for sustainable development. The Challenge has been in operation since 2008 and every engineering school in Australia has made some use of it at one time or another. This [evaluation] project was carried out with the co-operation of 13 universities from Australia and New Zealand who have maintained their use of the projects, albeit in widely divergent types of student cohort and courses. (Jolly, 2014, p. 3)

Thus, the evaluation was seeking to understand both how the program had been applied differently in different sites, and for different purposes, and what contributed to local successes and failures. As such, the evaluation was focused on both process and outcome, in that it sought to discover both how the intervention worked and with what effect. Realist Evaluation is ideal for this kind of multi-site, multi-context situation, where correlations between variables are unlikely to apply in all cases and an understanding of the range of generative causation that can apply is required.

In CMO terms, the ideal, desired operation of the intervention could be expressed in a highly compressed form (Table 1). It needs to be noted that there are dangers in such shorthand representations of CMO configurations (Pawson & Manzano-Santaella, 2012), which we will discuss further below. For now we acknowledge that this hypothesis about how the program should work includes many finer-grained levels of CMO configuration. In fact it was the task of the evaluation to find out just what those finer-grained configurations were.

Table 1: The ideal CMO configuration for the program (based on Jolly, 2014)

Context (C):
• First-year engineering curricula emphasise technical and theoretical subjects and pay little attention to practical “real-world” engineering.
• Need to develop so-called “soft skills” such as c…
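The cascade of contexts and mechanisms described above can be made concrete in code. The following Python sketch is a hypothetical illustration only (it is not from the paper, and all names and example strings are invented): it models a chain of CMO configurations in which each stage's outcome re-enters the analysis as the context for the next stage's mechanism.

from dataclasses import dataclass

@dataclass
class CMO:
    """One realist-evaluation hypothesis: Context + Mechanism -> Outcome."""
    context: str
    mechanism: str
    outcome: str

def cascade(initial_context, steps):
    """Chain CMO configurations: each stage's outcome becomes the
    context that triggers the next stage's mechanism."""
    configs = []
    context = initial_context
    for mechanism, outcome in steps:
        configs.append(CMO(context, mechanism, outcome))
        context = outcome  # the outcome re-enters as a later context
    return configs

# Invented example echoing the paper's course-design illustration.
stages = cascade(
    "first-year curriculum emphasises theory over real-world design",
    [
        ("staff reason about how to embed the EWB Challenge",
         "team-based project work is introduced"),
        ("students strategise in response to the new design",
         "communication and teamwork skills develop"),
    ],
)
for c in stages:
    print(f"C: {c.context}  +  M: {c.mechanism}  =  O: {c.outcome}")

Here the mechanism of course design yields an outcome that then functions as the context triggering the later mechanism of student behaviour, which is precisely the circularity the authors resolve by fixing the focus of explanation before classifying variables.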

Collaboration


Dive into Lesley Jolly's collaborations.

Top Co-Authors

Lydia Kavanagh (University of Queensland)
Lyn Brodie (University of Southern Queensland)
Merrilyn Goos (University of Queensland)
Laurie Buys (Queensland University of Technology)
Liza O'Moore (University of Queensland)
Alex Kostogriz (Australian Catholic University)