Publication


Featured research published by Tarek Azzam.


American Journal of Evaluation | 2013

Finding a Comparison Group: Is Online Crowdsourcing a Viable Option?

Tarek Azzam; Miriam R. Jacobson

This article explores the viability of online crowdsourcing for creating matched comparison groups. This exploratory study compares survey results from a randomized control group with those from a matched comparison group recruited through Amazon's Mechanical Turk (MTurk) crowdsourcing service to determine their comparability. Study findings indicate that online crowdsourcing, a process that taps many participants to complete specific tasks, is a potentially viable resource for evaluation designs where access to comparison groups, budgets, or time is limited. The article highlights the strengths and limitations of the online crowdsourcing approach and describes ways it could be used in evaluation practice.
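To make the comparability check concrete, here is a minimal sketch, in Python with simulated data rather than the study's actual measures, of how one might test whether a crowdsourced comparison group tracks a randomized control group on a survey outcome:

```python
# Minimal sketch with simulated data; group sizes, scales, and effect
# sizes are invented and do not come from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated mean survey scores (1-7 Likert-style) for each group.
randomized_control = rng.normal(loc=4.2, scale=1.1, size=120)
mturk_comparison = rng.normal(loc=4.3, scale=1.2, size=120)

# Welch's t-test: a small, non-significant difference is consistent
# with (though not proof of) group comparability.
t_stat, p_value = stats.ttest_ind(randomized_control, mturk_comparison,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Cohen's d adds a magnitude check beyond the p-value.
pooled_sd = np.sqrt((randomized_control.var(ddof=1)
                     + mturk_comparison.var(ddof=1)) / 2)
print(f"d = {(randomized_control.mean() - mturk_comparison.mean()) / pooled_sd:.2f}")
```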


American Journal of Evaluation | 2010

Evaluator Responsiveness to Stakeholders

Tarek Azzam

A simulation study was conducted to examine how evaluators modify their evaluation designs in response to differing stakeholder groups. Evaluators were provided with a fictitious description of a school-based program and asked to design an evaluation of it. After the design decisions were made, evaluators were presented with feedback from three differing stakeholder groups (i.e., decision maker, implementer, recipient) either endorsing or rejecting the evaluation design. Evaluators were then given the opportunity to modify (or not modify) their original design in response to the stakeholder feedback. The findings revealed that the more political power or influence a stakeholder group held over evaluation logistics (i.e., funding, data access), the more willing evaluators were to modify their design choices to accommodate perceived stakeholder concerns. These design modifications were typically made to ensure data access, reduce stakeholder resistance, and increase stakeholder buy-in.


American Journal of Evaluation | 2013

The Nature and Frequency of Inclusion of People with Disabilities in Program Evaluation

Miriam R. Jacobson; Tarek Azzam; Jeanette G. Baez

Although evaluation theorists over the last two decades have argued for the importance of including stakeholders from marginalized groups in program planning and research, little is known about the degree of inclusion in program evaluation practice. In particular, we know little about the type and level of inclusion of people with intellectual, developmental, and psychiatric disabilities in the evaluation of programs that aim to serve them. Through a content analysis of articles published in the last decade describing evaluations of programs for people with these types of disabilities, this article describes which stakeholders have been included in evaluations, how program recipient input was obtained, and in which stages of the evaluation stakeholder participation occurred. The findings indicate that program recipient disability type (developmental, psychiatric, or other) may predict type and level of inclusion, and inclusion tends to occur in later parts of the evaluation process.


American Journal of Evaluation | 2013

GIS in Evaluation: Utilizing the Power of Geographic Information Systems to Represent Evaluation Data

Tarek Azzam; David Robinson

This article provides an introduction to geographic information systems (GIS) and how the technology can be used to enhance evaluation practice. As a tool, GIS enables evaluators to incorporate contextual features (such as accessibility of program sites or community health needs) into evaluation designs and highlights the interactions between programs and their environments. Evaluators can formatively utilize GIS to examine implementation issues and their connections to the communities served and summatively to study program impacts and the factors contributing to variations between program sites. Improvements in technology as well as in data storage and access make this a feasible tool for a broader range of users. Through a hypothetical case study, the article discusses the strengths, limitations, and future trends of GIS in the context of the evaluation field.
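To illustrate the kind of workflow described, here is a hedged sketch using the geopandas library to overlay program sites on a community need layer; the file names and the need_index column are invented for illustration:

```python
# Illustrative GIS overlay; "neighborhoods.shp", "program_sites.geojson",
# and the "need_index" column are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

neighborhoods = gpd.read_file("neighborhoods.shp")    # polygon layer
sites = gpd.read_file("program_sites.geojson")        # point layer
sites = sites.to_crs(neighborhoods.crs)  # align coordinate systems

# Choropleth of community need with program sites plotted on top.
ax = neighborhoods.plot(column="need_index", cmap="OrRd", legend=True,
                        figsize=(8, 8))
sites.plot(ax=ax, color="blue", markersize=25)
ax.set_title("Program sites vs. community need (illustrative)")
plt.show()

# One contextual question GIS makes easy: how many sites sit in
# high-need areas?
high_need = neighborhoods[neighborhoods["need_index"] > 0.75]
in_high_need = gpd.sjoin(sites, high_need, predicate="within")
print(f"{len(in_high_need)} of {len(sites)} sites in high-need areas")
```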


Evaluation and Program Planning | 2015

Politics in evaluation: Politically responsive evaluation in high stakes environments

Tarek Azzam; Bret Levine

The role of politics has often been discussed in evaluation theory and practice. The political context can strongly shape an evaluation's design, approach, and methods, and politics also has the potential to influence the decisions made from evaluation findings. The current study focuses on the influence of the political context on stakeholder decision making. Using a simulation scenario, this study compares stakeholder decision making in high- and low-stakes evaluation contexts. Findings suggest that high-stakes political environments are more likely than low-stakes environments to lead to reduced reliance on technically appropriate measures and increased dependence on measures that better reflect the broader political environment.


American Journal of Evaluation | 2017

Evaluator Training Needs and Competencies: A Gap Analysis

Nicole Galport; Tarek Azzam

The systematic identification of evaluator competency training needs is crucial for the development of evaluators and the establishment of evaluation as a profession. Insight into essential competencies could help align training programs with field-specific needs, thereby clarifying expectations among evaluators, educators, and employers. This investigation of practicing evaluators' perceptions of competencies addresses the critical need for a competency training gap analysis. Results from an online survey of 403 respondents and a follow-up focus group indicate that the professional practice and systematic inquiry competencies are seen as most important for conducting successful evaluations. Evaluators identified a need for additional training in the interpersonal competence and reflective practice competency domains. The trends identified can support the development and modification of programs designed to offer training, education, and professional development to evaluators.
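The gap analysis itself reduces to a simple comparison: for each competency domain, mean rated importance versus mean rated training adequacy. A minimal sketch; the domain labels follow the abstract, but the ratings are invented:

```python
# Invented mean ratings (1-5 scales) chosen only to mirror the pattern
# the abstract reports; not the survey's actual numbers.
import pandas as pd

summary = pd.DataFrame(
    {"importance": [4.8, 4.7, 4.0, 4.3],
     "training":   [4.1, 4.2, 3.1, 3.2]},
    index=["professional practice", "systematic inquiry",
           "reflective practice", "interpersonal competence"],
)

# A larger gap flags a domain rated important but under-trained.
summary["gap"] = summary["importance"] - summary["training"]
print(summary.sort_values("gap", ascending=False))
```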


Evaluation and Program Planning | 2016

Crowdsourcing for quantifying transcripts: An exploratory study

Tarek Azzam; Elena Harman

This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or set of tasks. In this study, multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts and consistently selected the same supporting text from them. These findings suggest that crowdsourcing, with further development, could potentially be used as a mixed-method tool to offer a supplemental perspective on transcribed interviews.
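One plausible way to operationalize "nearly identical ratings" across independent crowdsourced samples is to correlate per-passage mean ratings; a sketch with simulated data (not the study's materials or measures):

```python
# Simulated consistency check between two independent crowdsourced
# samples rating the same 20 transcript passages on a 1-5 scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_quality = rng.uniform(1, 5, size=20)
sample_a = np.clip(true_quality + rng.normal(0, 0.3, 20), 1, 5)
sample_b = np.clip(true_quality + rng.normal(0, 0.3, 20), 1, 5)

# High inter-sample correlation suggests the ratings are reproducible.
r, p = stats.pearsonr(sample_a, sample_b)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```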


Evaluation Review | 2016

Methodological Credibility: An Empirical Investigation of the Public’s Perception of Evaluation Findings and Methods

Miriam R. Jacobson; Tarek Azzam

Background: When evaluations are broadly disseminated, the public can use them to support a program or to advocate for change. Methods: To explore how evaluations are perceived and used by the public, a sample of 425 people in the United States was recruited through Mechanical Turk (www.mturk.com), an online crowdsourcing service. Participants were randomly assigned to receive different versions of a press release describing a summative evaluation of a program. Each condition contained a unique combination of methods (e.g., randomized controlled design) and findings (positive or negative) used to describe the evaluation and its results. Participants in each condition responded to questions about their trust in the evaluation findings and their attitudes toward the program. Results: The type of evaluation methods and the direction of the findings both influenced the credibility of the findings, and the credibility of the findings moderated the relationship between the direction of the evaluation findings and attitudes toward the evaluated program. Additional evaluation factors to explore in future research with the public are recommended.
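The moderation result can be pictured as an interaction term in a regression of program attitudes on finding direction and perceived credibility. A hedged sketch with simulated data; only the sample size is taken from the abstract:

```python
# Simulated moderation analysis; coefficients and scales are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 425  # sample size reported in the abstract

direction = rng.integers(0, 2, n)      # 0 = negative, 1 = positive findings
credibility = rng.normal(3.5, 1.0, n)  # perceived credibility rating
# Built-in interaction: direction matters more when credibility is high.
attitude = (2 + 0.3 * direction + 0.2 * credibility
            + 0.5 * direction * credibility + rng.normal(0, 1, n))

df = pd.DataFrame({"direction": direction, "credibility": credibility,
                   "attitude": attitude})
# The direction:credibility coefficient carries the moderation effect.
model = smf.ols("attitude ~ direction * credibility", data=df).fit()
print(model.summary().tables[1])
```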


American Journal of Evaluation | 2016

Does Research on Evaluation Matter? Findings From a Survey of American Evaluation Association Members and Prominent Evaluation Theorists and Scholars

Chris L.S. Coryn; Satoshi Ozeki; Lyssa N. Wilson; Gregory D. Greenman; Daniela C. Schröter; Kristin A. Hobson; Tarek Azzam; Anne T. Vo

Research on evaluation theories, methods, and practices has increased considerably in the past decade. Even so, little is known about whether published findings from research on evaluation are read by evaluators and whether such findings influence evaluators’ thinking about evaluation or their evaluation practice. To address these questions, and others, a random sample of American Evaluation Association (AEA) members and a purposive sample of prominent evaluation theorists and scholars were surveyed. A majority of AEA members (80.95% ± 7.60%) and sampled theorists and scholars (84.21%) regularly read research on evaluation and indicate that research on evaluation has influenced their thinking about evaluation and their evaluation practice (97.00% ± 3.38% and 94.00% ± 4.79%, for AEA members, and 100% and 100%, for prominent theorists and scholars, respectively).
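The ± figures read like 95% confidence margins on sample proportions; here is a minimal sketch of that standard computation, with the caveat that the paper's exact method and subgroup sizes are assumptions:

```python
# Normal-approximation margin of error for a sample proportion.
import math

def margin_of_error(p, n, z=1.96):
    """95% margin for proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: p = 0.8095 with a margin near ±7.6 points implies n ≈ 103.
print(f"±{100 * margin_of_error(0.8095, 103):.2f} percentage points")
```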


Evaluation and Program Planning | 2018

Evaluative feedback delivery and the factors that affect success

Tarek Azzam; Cristina E. Whyte

This study examines the factors that can affect the credibility, influence, and utility of evaluative feedback, including the delivery strategy, accuracy, and type (positive/negative) of the feedback provided. In this study, over 500 participants were asked to complete a task and were then randomly assigned to conditions that varied the feedback delivery method, feedback accuracy, and feedback type (positive/negative). They were then asked questions about the feedback's credibility, influence, and utility.
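The design the abstract implies is a between-subjects factorial: delivery method × accuracy × feedback valence. A sketch of random assignment across those cells; the level labels are invented, since the abstract does not list them:

```python
# Hypothetical factorial assignment; condition labels are assumptions.
import itertools
import random

random.seed(3)

deliveries = ["written", "in-person"]    # assumed delivery methods
accuracies = ["accurate", "inaccurate"]  # assumed accuracy levels
valences = ["positive", "negative"]

conditions = list(itertools.product(deliveries, accuracies, valences))

# Shuffle participants, then deal them evenly across the 8 cells.
participants = [f"p{i:03d}" for i in range(504)]
random.shuffle(participants)
assignment = {pid: conditions[i % len(conditions)]
              for i, pid in enumerate(participants)}
print(assignment["p000"])
```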

Collaboration


Dive into Tarek Azzam's collaborations.

Top Co-Authors

Miriam R. Jacobson (Claremont Graduate University)
Elena Harman (Claremont Graduate University)
Bret Levine (Claremont Graduate University)
Cristina E. Whyte (Claremont Graduate University)
Michael Szanyi (Claremont Graduate University)
Anne T. Vo (University of Southern California)
Chris L.S. Coryn (Western Michigan University)
Christina A. Christie (Claremont Graduate University)
Cyn Yamashiro (Loyola Marymount University)