Publications


Featured research published by Shannon M. Chance.


Educational Planning: The Journal of the International Society for Educational Planning | 2009

Assessing university strategic plans: A tool for consideration

Shannon M. Chance; Brenda T. Williams

This article explores the use of rubrics as tools for assessing the quality of university-developed strategic plans. While tools exist for assessing the quality of the planning process, such tools are not readily available for assessing the resulting product or the overall quality of the plan itself. Specifically, a rubric is described that has been designed and field-tested for the purpose of evaluating the strategic planning document produced at the university level. This approach builds upon outcome assessment methods developed in the business sector and proposes a tool tailored for use in higher education.

INTRODUCTION

This article explores the use of rubrics as tools for assessing the quality of university-developed strategic plans. While tools exist for assessing the quality of the planning process, such tools are not readily available for assessing the product or the overall quality of the plan itself (Allison & Kaye, 2005; Kaufman & Grise, 1995). A number of tools do currently exist for evaluating business-related plans (University of Wisconsin, 2005). However, these tools are grounded in linear business models, which are not well suited to the variables found in higher education settings (Presley & Leslie, 1999; Rowley, Lujan, & Dolence, 1997; Shahjahan, 2005; Swenk, 2001). Rowley, Lujan, and Dolence identified and described a host of differences between business and education, and they clearly articulated the need for planning approaches tailored to higher education.

The rubric described in this paper was designed by a team of doctoral students enrolled in an educational planning course at The College of William and Mary. The team's methods for developing and field-testing an assessment rubric are described; the objective is to share findings of the project with scholars interested in educational planning. The assignment represents an effort to address concerns raised by Adams (1991), who articulated three crisis areas in planning: (a) definition and identity, (b) intellectual or scientific foundation/theory, and (c) evidence of success and utility. This article seeks to address one of these areas, success and utility, which may in turn enhance an understanding of the identity, purpose, and intentionality of strategic planning in higher education (Chickering & Reisser, 1993). The overarching framework used in this assessment rubric is built upon Holcomb's (2001) five critical questions for fostering change in schools and Allison and Kaye's (2005) recommendations regarding strategic planning for non-profit organizations. The call to conduct this type of assessment has come from both inside and outside of higher education; such assessment helps address demands for increased accountability in higher education and across the non-profit sector.

DEFINITION OF STRATEGIC PLANNING

There are many ways to define strategy. Pearson (1990) described the way strategy is used in higher education: to set direction, to focus effort, to guide "consistent concentration of effort through time," and to promote flexibility (as cited in Presley & Leslie, 1999, p. 202). Rowley, Lujan, and Dolence (1997) define strategic planning as "a formal process designed to help an organization identify and maintain optimal alignment with the most important elements of its environment" (p. 15). Presley and Leslie (1999) remind us that the main goal of strategic planning in higher education is to enhance practice. They join Rowley, Lujan, and Dolence (1997) in noting that the best result of planning is the guidance it provides to an organization. While traditional planning was operations-driven, today's strategic planning constitutes a process for seeking opportunity. Most contemporary organizations actually still use traditional (rather than genuinely strategic) planning methods, and thus miss opportunities for creative and proactive response. Many scholars in the field of planning agree that by defining a collective vision and charting a course aligned with this vision, through a truly strategic and ongoing planning process, an organization can effectively respond to unforeseen challenges in advantageous ways (Barnetson, 2001; Cutright, 2001; Gordon, 2002; Rowley et al., 1997; Swenk, 2001).
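The published rubric itself is not reproduced in this profile. As a rough, hypothetical sketch of the kind of instrument the abstract describes, the Python below models a plan-assessment rubric as criteria grouped under guiding questions modeled on Holcomb's five critical questions, each scored on a simple scale. The criterion names, question wording, and 0-3 scale are illustrative assumptions, not the field-tested tool.

```python
# Illustrative sketch of a plan-assessment rubric (NOT the published,
# field-tested instrument): criteria sit under guiding questions loosely
# modeled on Holcomb's (2001) five critical questions, each scored 0-3.

from dataclasses import dataclass

@dataclass
class Criterion:
    question: str   # guiding question the criterion falls under
    name: str       # what the evaluator looks for in the plan document
    score: int = 0  # 0 = absent, 1 = emerging, 2 = adequate, 3 = exemplary

# Hypothetical criteria; the abstract does not list the real rubric's items.
rubric = [
    Criterion("Where are we now?", "honest environmental scan"),
    Criterion("Where do we want to go?", "clear, shared vision statement"),
    Criterion("How will we get there?", "strategies linked to stated goals"),
    Criterion("How will we know we are getting there?", "measurable indicators"),
    Criterion("How will we sustain focus and momentum?", "review/update cycle"),
]

def assess(criteria):
    """Total score plus a per-question breakdown for one reviewed plan."""
    return sum(c.score for c in criteria), {c.question: c.score for c in criteria}

# An evaluator reads a strategic plan and fills in scores:
rubric[0].score, rubric[3].score = 2, 1
total, breakdown = assess(rubric)
print(f"total {total}/{3 * len(rubric)}", breakdown)
```

Structuring the rubric as data rather than prose makes field-testing variants straightforward: swapping the criteria list changes the instrument without touching the scoring logic.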


Archive | 2015

Designing the Identities of Engineers

Mike Murphy; Shannon M. Chance; Eddie Conlon

In 2007, Gary Downey, Juan Lucena, and Carl Mitcham argued that a "key issue in ethics education for engineers concerns the relationship between the identity of the engineer and the responsibilities of engineering work". They suggested that "one methodological strategy for sorting out similarities and differences in engineers' identities is to ask the 'who' question. Who is an engineer? Or, what makes one an engineer?" (Downey et al. 2007). This chapter explores these questions of who is an engineer and what makes one an engineer by examining how engineering and engineering technology students at Dublin Institute of Technology (DIT) describe and differentiate themselves. DIT offers both 4-year engineering degrees (equivalent to the educational standard required for professional status) and 3-year degrees in engineering technology, and it annually graduates the largest combined number of engineering and engineering technology majors in the country. We present results showing that there is no distinct sense of identity for the technologist. For faculty as well as for engineering and engineering technology students, design is perceived as a key differentiating activity that separates the engineer from the engineering technologist. Paradoxically, while all students chose DIT based on its reputation and practical focus, it is the engineering technology students who, as they near graduation, indicated they are prepared for the 'real world'. Results also show that, in their own responses, engineering and engineering technology students have fairly consistent views of their education and preparation for the workforce.


International Journal of Engineering Education | 2016

Using architecture design studio pedagogies to enhance engineering education

Shannon M. Chance; John Marshall; Gavin Duffy

Problem-Based Learning pedagogies that require high levels of inquiry and hands-on engagement can enhance student learning in engineering. Such pedagogies lie at the core of studio-based design education, having been used to teach architects since the Renaissance. Today, design assignments and studio-based learning formats are finding their way into engineering programs, often as part of larger movements to implement Student-Centered, Problem-Based Learning (PBL) pedagogies. This spectrum of pedagogies is mutually supportive, as illustrated in the University of Michigan's SmartSurfaces course, where students majoring in engineering, art and design, and architecture collaborate on wickedly complex and ill-defined design problems. In SmartSurfaces and other similar PBL environments, students encounter complex, trans-disciplinary, open-ended design prompts that have timely social relevance.

Analyzing data generated in studio-based PBL courses like SmartSurfaces can help educators evaluate and track students' intellectual growth. This paper presents a rubric for measuring students' development of increasingly refined epistemological understanding (regarding knowledge and how it is created, accessed, and used). The paper illustrates use of the tool in evaluating blogs created by students in SmartSurfaces, which in turn provides evidence to help validate the rubric and suggest avenues for future refinement. The overall result of the exploratory study reported here is to provide evidence of positive change among students who learn in PBL environments and to provide educators with a preliminary tool for assessing design-related epistemological development. Findings of this study indicate design-based education can have powerful effects and collaborating across disciplines can help engineering students advance in valuable ways.
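The rubric's categories are not reproduced in this profile, but the kind of analysis the abstract describes, coding dated blog entries against developmental levels and looking for change over a term, can be sketched. In this hedged example the four-level coding scale and the sample entries are invented for illustration; only the overall approach (rubric-scored blogs as longitudinal evidence) comes from the abstract.

```python
# Hypothetical coding of student blog entries against an epistemological
# development rubric. The 1-4 levels and the entries are illustrative
# assumptions; the published rubric's categories are not in the abstract.

from datetime import date
from statistics import mean

# (entry_date, coded_level): higher = more refined view of how knowledge
# is created, accessed, and used.
student_blog = [
    (date(2014, 9, 10), 1),
    (date(2014, 10, 8), 2),
    (date(2014, 11, 5), 2),
    (date(2014, 12, 3), 3),
]

def development_trend(coded_entries):
    """Compare mean rubric level in the first and second half of the term."""
    coded_entries = sorted(coded_entries)
    mid = len(coded_entries) // 2
    early = mean(level for _, level in coded_entries[:mid])
    late = mean(level for _, level in coded_entries[mid:])
    return early, late, late - early

early, late, gain = development_trend(student_blog)
print(f"early mean {early:.1f}, late mean {late:.1f}, gain {gain:+.1f}")
```

With real data, such coding would also require inter-rater reliability checks before any trend could be trusted as evidence of epistemological growth.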


Archive | 2010

Assessment Formats: Student Preferences and Perceptions

Michael Seymour; Shannon M. Chance

This paper provides a student perspective on the variety of forms of design critique available to educators. In architecture and landscape architecture, the design jury remains the dominant format for providing feedback to students. In recent years this format has come under scrutiny and its effectiveness has been called into question. However, little research has been done into the variety of alternative or supplemental formats available to educators. This paper explores an array of techniques that the authors have employed in design studio courses (including techniques suggested by students in Webster's 2007 article in the Journal of Architectural Education): written and verbal forms of feedback, peer and self-evaluations, feedback provided during the design process, and variations in the jury format. The benefits and limitations of each technique are explored through the results of two web-based surveys of students. The surveys were conducted department-wide at the Mississippi State University Department of Landscape Architecture and the Hampton University Department of Architecture, and consisted of a series of Likert-scaled and open-ended questions focused on students' perceptions of the educational and motivational value of each technique. Students were also asked to rank the various techniques in order of preference and to explain why they found the techniques helpful or not. Responses demonstrated a clear preference for one-on-one forms of evaluation. This result raises a number of questions about students' preparation for professional practice and the role of educators in fostering student independence. This paper explores these issues as well as the benefits and limitations of each technique in an effort to assist educators in making informed use of the various assessment formats.

Introduction

While criticism is an essential part of every designer's education, a design critique can be a powerful and even emotional experience for many students. Although this feedback is intended to promote learning, it is sometimes distressing for beginning design students. They are often unprepared for criticism and easily misunderstand comments or even the purpose of the event. These early experiences are sometimes so powerful that they drive students away from the design professions before they have had an opportunity to develop the skills or vocabulary necessary to succeed. For many others, bad critique experiences negatively influence their general attitude toward design criticism and thereby hinder their long-term development. Even among graduating students and practitioners, a healthy attitude toward design critique seems to be a rare trait, although there are probably few attributes more valuable in professional practice. The ability to speak and write coherently about design, as well as openness to client, peer, and community input, requires training and practice. Educators can help foster these skills by providing effective, encouraging, and constructive feedback tailored to students' needs. This paper provides a student perspective on a variety of critique formats, including written and verbal feedback, peer and self-evaluations, feedback during the design process, and variations in the jury format.

Historically, design critique has come in the form of the design "jury," during which students present their work in front of an audience composed of professors, peers, and invited professionals.
The most significant investigation of this tradition, Kathryn Anthony's Design Juries on Trial [1], provides an exhaustive exploration based upon surveys, interviews, student diaries, and behavioral observation. Anthony's research outlines many of the limitations of design juries: students often find them discouraging, confusing, boring, and educationally ineffective. In addition, Anthony identifies the disagreement that exists among students, faculty, and practitioners about the purpose of the jury process and its role in educating designers. She suggests that the design jury tradition may bear some responsibility for "driving away many qualified women and people of color...". As possible solutions to these problems, Anthony proposes a series of improvements to traditional design juries, as well as an array of alternative approaches for design educators to consider. More recent explorations include architect Helena Webster's 2007 article "The Analytics of Power" [2], which examines the roles of faculty, students, and guest critics in the design jury. Webster's yearlong ethnographic study found that juries are dominated by faculty and guest-critic commentary, while students are typically uninvolved in peer analysis. Webster identifies "passive compliance," "active compliance," and "active resistance" as the three primary ways that presenting students deal with critical commentary. She explains that regardless of tactic or ability, students focus primarily upon "gaining the best possible outcome" and not necessarily upon learning or becoming better designers. She concludes that design juries reinforce and objectify the "power differential between critic and student" and thereby distort learning outcomes. Alarmed by her findings, Webster proposes replacing the design jury with "a new set of pedagogic events that are carefully constructed to support student learning." She lists a series of alternative approaches suggested by students, including peer reviews, exhibitions, special tutorial days, and self-evaluation exercises [3]. Additional studies have investigated the connection between graphic quality and success in design juries [4], explored the essential nature [5] and substance [6] of design critiques, and analyzed verbal forms of design evaluation [7]. This paper builds upon prior research by evaluating a series of alternative or supplementary critique approaches and comparing them to students' perceptions of the design jury. The purpose of this research is to assist design educators in making informed selections of evaluation techniques, and to foster a continuing dialogue regarding best practices for design studio instructors.

Methodology

The student surveys were conducted department-wide at Mississippi State University's Department of Landscape Architecture in the fall of 2007 and at Hampton University's Department of Architecture in the fall of 2008. Surveys were administered online and consisted primarily of Likert-scaled statements aimed at determining students' perceptions of the educational value, fairness, and motivational aspects of each technique. Students were given a definition of each type of feedback (see Figure 1) and asked to rank the techniques based upon perceived effectiveness and in order of preference. In addition, students were invited to justify their choices of most and least preferred techniques. The survey took students an average of fifteen minutes to complete. Fifty-three of one hundred and twenty-seven enrolled landscape architecture students responded to the survey, while fifty-four of one hundred and ninety-four enrolled architecture students responded. The two groups of respondents differed substantially in terms of field of study, ethnicity (Figure 2), and gender (Figure 3).

Figure 1: Definitions Provided to Students in the Survey

Gallery Review: Review of a completed project in which the student's work is displayed and both professionals and faculty critique the work individually or in small groups.
One-on-One Desk Critique: Critique in the studio between the professor(s) and student during the design process.
One-on-One Evaluation: Critique of a completed project involving the student and the professor(s).
Studio Pin-up: Informal studio critique during the design process, typically involving the entire class or large groups within the class. These critiques could involve the professor(s), additional faculty, or invited guests.
Traditional Design Jury: Oral and graphic presentation of a completed project to a jury of qualified professionals, which could include professors, additional faculty, invited professionals, or other guests, as well as an audience consisting of the student's classmates.
Verbal Peer Evaluation: Verbal critique of a project by one or more of the student's classmates.
Written Evaluation by the Professor: An in-depth, written critique of a project by the professor(s).
Written Peer Evaluation: Written critique of a project by one or more of the student's classmates.
Written Self Evaluation: Written critique of the student's own project.

Figure 2: Ethnicity of Survey Respondents [bar chart comparing Caucasian, African American, and all other respondents; counts not recoverable from this extract]

Notes

1. Kathryn Anthony, Design Juries on Trial: The Renaissance of the Design Studio (New York: Van Nostrand Reinhold, 1991).
2. Helena Webster, "The Analytics of Power: Re-presenting the Design Jury," Journal of Architectural Education 60, no. 3 (2007): 21-27.
3. This paper was largely inspired by the students' suggestions and is intended to further understanding of a variety of evaluation techniques by comparing them to students' perceptions of the design jury.
4. Meltem O. Gurel and Inca Basa, "The Status of Graphical Presentation in Interior/Architectural Design Education," International Journal of Art and Design Education 23, no. 2 (2004): 192-206.
5. Jeffrey Karl Ochsner, "Behind the Mask: A Psychoanalytic Perspective on Interaction in the Design Studio," Journal of Architectural Education 53, no. 4 (2000): 194-206.
6. Belkis Uluoglu and Taksim Taskisla, "Design Knowledge Communicated in Studio," Design Studies 21, no. 1 (2000): 33-58.
7. See Deanna P. Dannels, "Performing Tribal Rituals: A Genre Analysis of 'Crits' in Design Studios," Communication Education 54, no. 2 (2005): 136-160, and Janne Morton and David O'Brien, "Selling Your Design: Oral Communication Pedagogy in Design Education," Communication Education 54, no. 1 (2005): 6-19.
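The response figures reported in the Methodology above imply response rates of roughly 42% and 28%. The short sketch below reproduces that arithmetic and adds a generic summary of a single Likert item; the 1-5 ratings in it are made up for illustration, since the paper's raw survey data are not included in this extract.

```python
# Response-rate arithmetic from the figures reported in the paper, plus a
# generic Likert-item summary on made-up ratings (the raw survey data are
# not reproduced in this profile).

from collections import Counter

surveys = {
    "Landscape Architecture (MSU, fall 2007)": (53, 127),
    "Architecture (Hampton, fall 2008)": (54, 194),
}

for group, (responded, enrolled) in surveys.items():
    print(f"{group}: {responded}/{enrolled} = {responded / enrolled:.1%}")
# -> 41.7% and 27.8%

# Hypothetical 1-5 Likert ratings for one technique (e.g., desk critiques).
ratings = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]
counts = Counter(ratings)
print({k: counts[k] for k in sorted(counts)},
      "mean =", sum(ratings) / len(ratings))
```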


Planning for Higher Education | 2010

Strategic by Design: Iterative Approaches to Educational Planning

Shannon M. Chance


Planning for Higher Education | 2012

Planning for Environmental Sustainability: Learning from LEED and the USGBC

Shannon M. Chance


International Journal of Educational Advancement | 2008

Proposal for using a studio format to enhance institutional advancement

Shannon M. Chance


The College Student Affairs Journal | 2009

Keeping (or Losing) the Faith: Reflections on Spiritual Struggles and Their Resolution by College Seniors

Jodi Fisler; Holly Alexander Agati; Shannon M. Chance; Amie E. Donahue; Gregory A. Donahue; Eric J. Eickhoff; Sara E. Kolb Gastler; Joseph C. Lowder; John D. Foubert


Archive | 2012

Learning outcomes from a multidisciplinary, hands-on think tank

Shannon M. Chance; John Marshall; James P. Barber


New Directions for Higher Education | 2014

Bringing It All Together Through Group Learning

Shannon M. Chance

Collaboration


Dive into Shannon M. Chance's collaborations.

Top Co-Authors

Brian Bowe, Dublin Institute of Technology
Gavin Duffy, Dublin Institute of Technology
Mike Murphy, Dublin Institute of Technology
Eddie Conlon, Dublin Institute of Technology