CommunityClick: Capturing and Reporting Community Feedback from Town Halls to Improve Inclusivity
Mahmood Jasim, Pooya Khaloo, Somin Wadhwa, Amy X. Zhang, Ali Sarvghad, Narges Mahyar
MAHMOOD JASIM, University of Massachusetts Amherst, USA
POOYA KHALOO, University of Massachusetts Amherst, USA
SOMIN WADHWA, University of Massachusetts Amherst, USA
AMY X. ZHANG, University of Washington, USA
ALI SARVGHAD, University of Massachusetts Amherst, USA
NARGES MAHYAR, University of Massachusetts Amherst, USA

Local governments still depend on traditional town halls for community consultation, despite problems such as a lack of inclusive participation for attendees and difficulty for civic organizers to capture attendees' feedback in reports. Building on a formative study with 66 town hall attendees and 20 organizers, we designed and developed CommunityClick, a communitysourcing system that captures attendees' feedback in an inclusive manner and enables organizers to author more comprehensive reports. During the meeting, in addition to recording meeting audio to capture vocal attendees' feedback, we modify iClickers to give voice to reticent attendees by allowing them to provide real-time feedback beyond a binary signal. This information then automatically feeds into a meeting transcript augmented with attendees' feedback and organizers' tags. The augmented transcript, along with a feedback-weighted summary of the transcript generated using text analysis methods, is incorporated into an interactive authoring tool for organizers to write reports. From a field experiment at a town hall meeting, we demonstrate how CommunityClick can improve inclusivity by providing multiple avenues for attendees to share opinions. Additionally, interviews with eight expert organizers demonstrate CommunityClick's utility in creating more comprehensive and accurate reports to inform critical civic decision-making. We discuss the possibility of integrating CommunityClick with town hall meetings in the future as well as expanding to other domains.

CCS Concepts: • Human-Centered Computing → Human Computer Interaction (HCI).

Additional Key Words and Phrases: Town hall; automatic transcription; iClicker; community feedback
ACM Reference Format:
Mahmood Jasim, Pooya Khaloo, Somin Wadhwa, Amy X. Zhang, Ali Sarvghad, and Narges Mahyar. 2018. CommunityClick: Capturing and Reporting Community Feedback from Town Halls to Improve Inclusivity. J. ACM 37, 4, Article 111 (August 2018), 32 pages. https://doi.org/10.1145/1122445.1122456
Traditional community consultation methods, such as town halls, public forums, and workshops, are the modus operandi for public engagement [52, 94]. For fair and impartial civic decision-making, the inclusivity of community members' feedback is paramount [60, 94, 126]. However, traditional methods rarely provide opportunities for inclusive public participation [30, 87, 95]. For instance, reticent meeting attendees struggle to speak up and articulate their viewpoints due to fear of confronting outspoken and dominant individuals [27, 127]. This lack of inclusivity in traditional face-to-face meetings results in an uneven representation of community members and often fails to capture the broader perspectives of attendees [70]. As a result, these methods often fall short in achieving the desired exchange of perspectives between government officials and the community [76, 118, 119]. Furthermore, meeting organizers grapple with simultaneously facilitating often contentious discussions and taking meeting notes to capture attendees' broader perspectives [70, 95]. These bottlenecks further obstruct inclusivity and may lead to biased decisions that can significantly impact people's lives [94, 95]. Advancements in computer-mediated technology can address this predicament by creating a communication channel between these entities.

Bryan [29] and Gastil [56] investigated the state of town halls and demonstrated a steady decline in civic participation due to the growing disconnect between local government and the community. To reengage disconnected, reticent, or disenfranchised community members, researchers in HCI and digital civics have offered novel strategies and technological interventions to increase engagement [60, 62, 94, 107, 130]. Researchers in this field have proposed several online technologies that made wider participation possible for community members (e.g., [4, 5, 7, 93]). Despite the introduction of such online platforms, government officials and decision-makers predominantly favor traditional face-to-face meetings to create relationships, foster discourse, and conduct follow-up conversations with community members [16, 40, 94] to understand their views and aspirations [52, 69, 131]. However, employing technology to capture attendees' feedback, in particular silent attendees' feedback, in face-to-face meetings remains largely unexplored.

In this work, we use the term Town Hall to refer to various community consultation approaches where community members convene to meet and discuss civic issues with government officials or decision-makers. Town halls can take many forms, but the main goal is to establish a communication channel between the officials and the community, to inform the community about civic issues, and often to receive the community's feedback [29]. Digital civics is an emerging interdisciplinary area that explores novel ways to utilize technology for promoting broader public engagement and participation in the design and delivery of civic services [62, 107, 130].
Commonly, feedback in meetings is gathered by voting or polling attendees [25, 90, 103] or by taking notes during the meeting [29, 91, 96]. However, voting often restricts attendees to only agreeing or disagreeing, which diminishes the richness of the captured feedback rather than promoting inclusivity [22, 94]. To help alleviate this problem, prior work has mostly focused on automatic speech recognition [3, 128] and interactive annotations [19, 74] to help organizers take notes for creating reports. However, these methods rarely preserve the discussion context or improve the inclusivity of attendees' feedback.

To better understand the needs of attendees and organizers in community consultations and explore how technology can help address these issues, we conducted a formative study by attending three town halls in a college town in the United States. We surveyed a total of 66 attendees to inquire about their ability to voice their opinions during town halls. 17% of the attendees (11 responses) expressed that despite being physically present, they could not voice their opinions. They attributed this to factors such as intimidation from other participants, lack of confidence, and fear of interruption. Moreover, we surveyed 20 organizers to identify what could help them make better use of town halls and ensure that public feedback is prioritized in the reports they author. We found that organizers often relied on their memories or the notes taken during the meeting to generate reports. However, they struggled to accurately capture or remember the attendees' feedback and important details after the meeting. In effect, these incomplete memories and meeting notes could result in incomprehensive reports. These findings indicated a requirement for better capturing attendees' feedback and preserving meeting discussion details to support organizers in authoring more comprehensive reports.

Based on our formative study, we designed and developed CommunityClick, a system for town halls that captures more inclusive feedback from attendees during the meeting and enables organizers to author more comprehensive meeting reports. We modified iClickers [9] to allow attendees to silently and anonymously provide feedback and to enable organizers to tag the meeting discussion at any time during the meeting. We chose iClickers due to their familiarity and ease of use [65, 102, 134] as an audience response system (ARS) [106]. Furthermore, we augmented the automatically generated meeting transcript by synchronizing it with attendees' feedback and organizers' tags. We also provided an interactive interface where organizers can explore, analyze, and utilize the augmented meeting transcript in a data-driven way, along with a novel feedback-weighted summary of the meeting discussion, to author meeting reports at their convenience. To evaluate the efficacy of our approach, we conducted a field experiment in the wild, followed by eight semi-structured interviews with experienced organizers. Our results demonstrate the efficacy of CommunityClick in giving voice to reticent participants to increase their involvement in town halls, capturing attendees' feedback, and enabling organizers to compile more inclusive, comprehensive, and accurate meeting reports in a way that lends credibility to the report creation process.

Our key contributions in this work are as follows: 1) using a communitysourcing technology to enable attendees to share their feedback at any time during the meeting by modifying iClickers as a real-time response mechanism, 2) augmenting meeting transcripts by combining organizers' tags and attendees' feedback to preserve discussion context and feedback from a broader range of attendees, 3) applying a novel feedback-weighted summarization method to generate meeting summaries that prioritize the community's feedback, 4) developing an interface for organizers to explore and utilize the augmented meeting transcript to author more comprehensive reports, and 5) insights from a field experiment in the wild that demonstrate how technology can be effective in capturing attendees' voices and authoring more inclusive reports, along with future directions to enhance our approach.
In this section, we describe the current challenges that inhibit inclusivity in town hall meetings. We also discuss existing technologies designed to promote engagement in various meeting scenarios and to help analyze and utilize meeting data.
Prior investigations by Bryan [29] and Gastil [56] showed a steady decline in civic participation in town halls due to the growing disconnect between local government and community members and the decline in social capital [43, 111, 113]. Despite the introduction of online methods to increase public engagement in the last decade [4, 5, 7, 37, 81, 93], government officials continue to prefer face-to-face meetings to engage the community in the decision-making process [32, 52, 94]. They believe face-to-face meetings facilitate two-way communication between decision-makers and community members that can foster discourse and help them understand the views and aspirations of the community members [32, 52, 69, 94, 131]. However, constraints such as fixed physical locations for co-located meetings and scarcity of time and resources limit the efficacy of face-to-face processes and inhibit the ability of officials to make proper use of town halls [18, 70, 87, 95, 120]. Such constraints might repel or alienate citizens for whom traveling or dedicating time for town halls may not be a viable option for physical, economic, or motivational reasons [38, 71, 100, 120]. While predictors such as education, income, and intrinsic interest help gauge civic engagement [45], social dynamics, such as shyness and the tendency to avoid confrontation with dominant personalities, can also hinder opinion sharing in town halls by favoring privileged individuals who are comfortable or trained to take part in contentious public discussions [27, 127]. Specifically, people with training in analytical and rhetorical reasoning often find themselves at an advantage when discussing complex and critical civic issues [119]. As a result, town halls inadvertently cater to a small number of privileged individuals, and silent participants often become disengaged despite physically attending the meetings [61]. Due to this lack of inclusivity, the outcome of such meetings often tends to feel unjust and opaque to the general public [39, 54].
To increase broader civic participation, researchers in HCI have proposed both online [4, 5, 7, 81, 93] and face-to-face [21, 80, 91, 125] technological interventions that use a communitysourcing approach. For instance, to increase engagement in town halls, some researchers have experimented with audience response systems (ARS) [25, 77, 80, 103]. Murphy used such systems to promote democracy and community partnerships [103]. Similarly, Boulianne et al. deployed clicker devices in contentious public discussions about climate change to gauge public opinions [25]. Bergstrom et al. used a single-button device with which attendees anonymously voted (agree/disagree) on issues during the meeting. They showed that back-channel voting helped underrepresented users get more involved in the meeting [22]. The AmericaSpeaks public engagement platform, the 21st Century Town Meeting®, also used audience response systems to collect feedback and perform straw polls and votes during town halls [90]. However, in these works, the audience response systems were used either for binary voting or polling [22, 90], or to receive feedback on specific questions that expected attendees' responses on a Likert scale-like spectrum [88]. These restrictions limit when and how meeting attendees can share their feedback. Audience response systems have seen widespread success as a lightweight tool to engage participants, promote discussions, and invoke critical thinking in the education domain [14, 65, 79, 101, 102, 134]. As such, these devices have the potential to provide a communication channel for silent participants to express their opinions without the obligation to verbalize and risk confrontation. We build upon their success in the education domain by appropriating iClickers for the civic domain. We modify and utilize iClickers for town halls to create a mechanism that supports silent and real-time community feedback.

HCI researchers have also proposed solutions to increase participation in face-to-face meetings, such as design charrettes or group meetings in general, using various tools and methods [21, 22, 67, 92, 122]. Some researchers used interactive tabletops and large screen surfaces to engage attendees [67, 92, 122]. For example, UD Co-Spaces [92] used a tabletop-centered multi-display environment for engaging the public in complex urban design processes. Memtable [67] used a tabletop display to increase attendees' engagement by integrating annotations using a multi-user text editor on top of multimedia artifacts. IdeaWall visualized thematically grouped discussion contents in real time on a large screen during the meeting to encourage attendees to discuss those topics [122]. However, large displays along with real-time visualizations might distract attendees from concentrating on meeting discussions [15], which might lead to less contribution from participants irrespective of how vocal they are [61]. Furthermore, these innovative approaches might be overwhelming for meeting attendees, especially in the civic domain, due to the heterogeneity of participants with a wide spectrum of expertise and familiarity with technology [13, 42, 127]. It might also be impractical and financially infeasible to use expensive tabletops and large interactive displays for town halls organized in a majority of cities.

Communitysourcing leverages the specific knowledge of a targeted community. It follows the crowdsourcing model, but takes a more direct approach in harnessing collective ideas, input, and feedback from targeted community members to find solutions to complex problems such as civic decision-making [28, 64].
Prior work showed that decision-makers often relied on meeting reports generated by organizers to inform public policies that had potential long-term impacts on the community [91]. Commonly, organizers generate these reports based on the outcomes of voting or polling attendees [25, 90, 103] and on notes taken during the meeting [29, 91, 96]. They often use manual note-taking techniques such as pen-and-paper or text editors [47]. However, simultaneously taking notes to capture attendees' feedback and facilitating the meeting to guide and encourage discussions is overwhelming and often leads to losing critical information and ideas [110, 117]. Sometimes the meetings are audio-recorded and transcribed for review before writing the report. However, such reviewing processes require significant manpower and labor [90, 91]. Furthermore, the audio recordings themselves do not capture the feedback of reticent participants who did not speak up.

For general meeting scenarios, researchers have proposed improvements to manual note-taking for capturing meeting discussions [2, 35, 46, 75]. For example, LiteMinutes used a web interface that allowed creating and distributing meeting contents [35]. MeetingKing provided meeting agenda templates to help organizers create reports [2]. Similarly, LiveNotes facilitated note-taking using a shared whiteboard where users could write notes using a digital pen or a keyboard [20, 75]. Another group of researchers experimented with tools and techniques to support the facilitation of online synchronous group discussions. For example, SolutionChat provides moderation support to group discussion facilitators by visualizing group discussion stages and featured opinions from participants and by suggesting appropriate moderator responses [86]. Bot in the Bunch takes a different approach and proposes a chatbot agent that aims to enhance goal-oriented online group discussions by managing time, encouraging participants to contribute to the discussion, and summarizing the discussion [82]. Similarly, Tilda synthesizes online chat conversations using structured summaries and enables annotation of chat conversations to improve recall [135].

Closer to our approach, some researchers proposed techniques for automatically capturing meeting discussions without significant manual intervention [1, 3, 19, 128]. For example, CALO and Voicea are automated systems for annotating, transcribing, and analyzing multiparty meetings [3, 128]. These systems use automatic speech recognition [112] to transcribe the meeting audio and extract topics, question-answer pairs, and action items using natural language processing [66]. However, automatic approaches are often error-prone and susceptible to miscategorization of important discussion topics [36, 97]. To address this problem, some researchers suggested incorporating human intervention to control and compile reports without completely depending on automatically generated results [1, 19, 74]. SmartNotes [19] and commercial applications such as ICompassTech [1] use this approach, where users can add and revise notes and topics. Although these methods combine automatic and interactive techniques, they enable the utilization of meeting discussions without capturing sufficient circumstantial input. The automatically generated transcript might record discussions, but it may not contain feedback shared by all attendees, especially silent ones. Furthermore, these tools are designed for small-scale group meetings and may not be practical for town halls in the civic domain, where attendee numbers and requirements can vary significantly [29, 91]. The heterogeneity of attendees in town halls [13, 42], the lack of inclusivity in sharing their opinions [76, 119], the deficient methods to record meeting discussions [90], and the limited financial and design pragmatism of existing technologies [92, 122] necessitate a closer investigation and innovation in designing technology to address both meeting attendees' and organizers' challenges regarding inclusivity in town halls.
To inform our work, we wanted to understand the perspectives and needs of organizers and attendees who participate in community consultations. To this end, we attended several town halls, including multiple public engagement sessions in a college town in the United States (U.S.). The agenda for these town halls included discussions regarding new public school buildings, where the town authorities wanted the community's feedback on two new proposals that involved millions of dollars worth of renovation or reconstruction. The town authorities made careful considerations to ensure that the public had access to all the information about the proposals by arranging six community-wide engagement sessions organized by professional facilitators. We joined three of these six sessions over a span of three months. We decided to investigate this particular case due to the unique characteristics of the town, where there is relatively high citizen engagement in discussions around education and public schools. We also wanted to investigate how people discuss potentially contentious proposals in town halls that pitted two contrasting ideas for future public school buildings against each other.
We approached both the meeting attendees, who were community members, and the meeting organizers, who included facilitators and town officials. The town halls began with the organizers presenting their proposal to attendees. Afterward, attendees engaged in discussions and shared their ideas, opinions, and concerns about the proposals. The facilitators were responsible for guiding the discussion and collecting the attendees' feedback. They played a neutral role in the discussion and did not influence the attendees' opinions in any way.

To understand the attendees' perspectives, we conducted surveys after each town hall. The attendees were all residents of the town and attended the meeting of their own volition. We surveyed a total of 66 attendees. We refer to the attendees from our formative study as FA and the organizers from our formative study as FO. In the survey, we asked them open-ended questions regarding their motivation to join the town hall meetings, their experiences in these meetings, and their thoughts around what makes these face-to-face meetings successful. We also asked about their ability to voice their opinions in town halls and about their familiarity with audience response technologies. Furthermore, to gain an understanding of what type of semantic tags they wanted to provide during a town hall, we compiled a list of tags based on prior work on meeting discussions and public engagements that focused on characterizing effective participation during synchronous communication between organizers and meeting attendees in both online and face-to-face settings [68, 72, 104, 105, 135, 136]. The list of tags is presented in Fig. 1(B). We provided this list to the attendees and asked them to rate which of these tags would help them express their thoughts in town halls. They rated each tag on a 5-point Likert scale [88].

During these town halls, we made contact with the organizers and explained our project goals. From these initial contacts, we used the snowball method [59] to reach other organizers working across the U.S. to learn more about their practices regarding town hall meeting organization, facilitation, and report creation. We surveyed a total of 20 organizers with an average of 10.5 years of experience (min 1 year, max 35 years) in conducting town halls across the U.S. Our survey participants consisted of town administrators, town clerks, senior planners, directors, professional facilitators, and chairs of different town committees. We asked them open-ended questions about their meeting data recording practices, including what data they prioritize when recording, the challenges they face while simultaneously organizing the meeting and recording meeting notes, and their post-meeting data analysis and meeting report generation processes. We also asked about their choice of tools and technologies for recording meeting notes and for creating reports, and what they think are the elements that constitute a good report. To identify representative tags that could help organizers track and categorize meeting discussion, we compiled another list of tags based on prior work [68, 72, 104, 105, 135]. We used a different list from the one we provided to attendees (Fig. 1(A)) because prior work suggests that organizers and attendees need different tags to categorize meeting discussion based on their perspectives. We asked organizers to rate the tags in order of importance on a five-point Likert scale [88]. The survey questions and lists of tags are provided as supplementary materials.

Fig. 1. This figure presents attendees' and organizers' perceived importance of two sets of tags that we compiled based on prior research: (A) shows the ratings of 20 organizers on a list of 10 tags for organizers; (B) shows the ratings of 66 community members on a list of 9 tags for attendees' feedback.
Here, we report the findings from our surveys with both attendees and organizers. The 66 attendees we surveyed were highly motivated to attend town halls, and the majority of them (64%, 42 responses) considered attending such meetings to provide their feedback on civic issues to be their civic duty. Most of the attendees (88%, 58 responses) attended two or more town halls every year. Regarding their familiarity with technology, every meeting attendee (100%, 66 responses) mentioned having a computer, a smartphone, and internet access, but 61% of them (40 responses) had never used an audience response system before. We also found that 17% (11 responses) of meeting attendees felt they were not able to share their feedback during these meetings, and 23% (15 responses) were not satisfied with the way town halls were organized to discuss critical issues. It was surprising for us to find that 17% of people from a homogeneous, relatively wealthy, and educated community in a college town believed that they could not voice their opinions during town halls. Despite their unfamiliarity with audience response systems, the majority of the meeting attendees (87%, 57 responses) mentioned that they were willing to use such devices in town halls to share their feedback.

In response to the question regarding what makes face-to-face town hall meetings successful, a group of attendees mentioned that the success of town hall meetings hinges upon the attendees' ability to openly communicate their opinions and hear others' opinions. One attendee (FA-17) mentioned, "Town halls need to provide opportunities for all people to be heard, having diversity of voices and opinions, and be present in the discussion." Another attendee (FA-36) mentioned, "Being face-to-face means asking questions, listening to others' questions and ideas, and to be able to see their emotions, nuances and expressions." Several attendees mentioned facing challenges around sharing their opinions due to being dominated in the discussion. One such attendee (FA-23) mentioned, "Town halls give us the chance to talk things out, but often this doesn't happen and people get shut down." Another attendee (FA-11) mentioned, "You need to have ground rules so that we stay on track and no one dominates the discussion." Some attendees considered that facilitators play an important role in the success of town halls and that skilled and well-equipped facilitators can make a difference. One attendee (FA-57) emphasized the importance of a skilled facilitator, saying, "Professional facilitators understand the context of our community, and the history of tension regarding the issues. They move it along and listen with open minds." Another attendee (FA-47) mentioned, "Skilled facilitators make sure all voices are heard, organize the discussion around goals and keep everyone focused on tasks."

When rating the tags for sharing opinions during meetings (Fig. 1(B)), the majority (90%) of the attendees considered Agree and Disagree to be the most important tags. 75% or more of the attendees thought Unsure, Important, Confused, and Neutral to be important.

The majority of the organizers (17 responses) mentioned that manpower and budget constraints often forced them to forego the appointment of designated note-takers; thus, they must shuffle between organizing and note-taking during the meetings. Some organizers also mentioned how context-switching between organizing the meeting and taking notes often led to missing critical evidence or information (8 responses). They employed a variety of methods for recording meeting data depending on their convenience and their operational abilities and experience with such methods, including pen-and-paper (5 responses), text editors (6 responses), audio or video recorders (3 responses), or a combination of these methods. To generate reports from the notes taken during the meetings, organizers usually used text editors (17 responses), with preferences towards Microsoft Word and Notepad (12 responses).

However, when asked about the time required to compile a report from meeting notes, their responses varied from 15 minutes to a few days. The variation, in part, can be attributed to the amount of notes and the format in which notes were captured. All organizers (20 responses) mentioned that the report generation process involves some form of summarization of meeting records to retain the most important discussion components. One organizer (FO-7) explained, "If I'm responsible for the report, I listen to the recording, review the notes, and translate them into coherent accounts of meeting attendees, topics, discussion, and decisions/outcomes." Another organizer (FO-1) mentioned, "I start with the agenda, review the audio, then edit and summarize the meeting notes into report content."

Organizers also described high-level properties of what would constitute a good report. One organizer (FO-1) mentioned that good meeting reports should be "accurate, comprehensive, clear, and concise". Another organizer (FO-4) emphasized that several components constitute a good report, including "[the] main agenda items, relevant comments, consensus, action plans, and feedback". One organizer (FO-17) thought meeting reports should be "balanced, fair, and pertinent" towards multiple perspectives; however, this is often challenging because they only "listen to a few, while others stay silent".

When rating tags that would help them take better notes to create good reports (Fig. 1(A)), the meeting organizers unanimously (100%) endorsed the tag Decision. The tags Topic, Action Items, Main Issue, and Date were preferred by more than 70% of organizers. However, all other tags except for Q&A were favored by more than 50% of organizers. These survey responses suggest that preferences for tagging meeting discussions vary among organizers, and different sets of tags might be required to generate useful reports from different meetings with diverse agendas.
Based on prior work and our formative study, we identified four design goals to guide our system design that address the requirements and challenges of both meeting attendees and organizers. First, we found that some attendees lacked a way to respond to ongoing discussions. Furthermore, many organizers struggled to keep track of the discussion and take notes simultaneously. Thus, we needed to provide a communication channel between attendees and organizers for sharing opinions and capturing feedback (G1). Second, organizers often refer to several sources of meeting data, including meeting notes and audio/video recordings, to compile meeting reports. Hence, the meeting discussion audio, organizers' tags, and attendees' feedback should be captured and combined together to provide organizers with a holistic view of the meeting data (G2). Third, organizers perform some form of summarization to generate meeting reports. However, many organizers also struggled to account for attendees' feedback while summarizing meeting data. This challenge motivated a third goal: to introduce summarization techniques that incorporate attendees' feedback and help organizers get the gist of meeting discussions (G3). Finally, organizers needed to examine the meeting-generated data to capture more inclusive attendees' feedback so that they could write more comprehensive reports. To that end, our final design goal was to provide exploration and report-authoring functionalities to help them investigate the meeting data and identify evidence to generate reports that included and reflected attendees' feedback (G4).

Fig. 2. A snapshot of CommunityClick's workflow. During the meeting, attendees and organizers can use iClickers to share feedback and tag the meeting, respectively. The meeting is also audio-recorded for transcription. The audio recordings are transcribed automatically and then augmented with the organizer's tags and attendees' feedback. Furthermore, we generate the feedback-weighted discussion summary and extract the most relevant topics. The interactive interface, which is available online for organizers to examine and author meeting reports, enables the exploration and utilization of the augmented meeting discussions.
Guided by the design goals, we designed and developed CommunityClick, a system where we modified iClickers as a real-time response mechanism and augmented meeting transcripts to capture more inclusive feedback. We also introduced a novel feedback-weighted summarization to prioritize attendees' feedback and enabled exploration and utilization of the augmented transcript through an interactive interface for organizers to author meeting reports. Here, we provide a scenario where CommunityClick can be employed (Fig. 2), followed by the system description.
Michelle is an organizer who has been appointed by the local government officials to organize an important town hall. Given the importance of the meeting, she decides to deploy CommunityClick so she can focus on facilitating the meeting while using iClickers to capture the community's feedback.
Adam is a community member who is attending the town hall. He cares about the community and wants to share his opinions on the agenda. He prefers to avoid confrontations, especially in town halls, as he is worried about speaking up and running into arguments. In the meeting, he is given an iClicker and instructions for using it to share his opinions using five options. Adam engages in discussion with other attendees in the meeting, but whenever he feels hesitant to speak up, he uses the iClicker to provide feedback.

A week later, Michelle finally gets around to writing the report of the town hall. By now, she has forgotten a significant portion of the meeting discussion. She logs in to CommunityClick and selects the town hall. She uses the timeline and feedback-weighted summary to get an overview of the meeting discussion and jog her memory by exploring the meeting discussion. She uses her own tags, the timeline, and the interactive summary to investigate the augmented meeting transcript that contains attendees' feedback alongside the discussion segments. Finally, she authors the report by importing information into the text editor from the transcript.
Fig. 3. The apparatus used to capture organizers' tags and attendees' feedback. (A) The iClicker for organizers to tag the meeting. (B) The iClicker for attendees to add their feedback. We used different sets of tags for organizers and attendees based on our formative study. Each iClicker was labeled with the respective set of tags to reduce the cognitive load of mapping options to the iClicker buttons. (C) The iClicker recorder. We used an Adafruit Feather M0 with the 900 MHz RFM69W/RFM69HW transceiver to capture iClicker clicks with timestamps in real time to synchronize tags and feedback with the meeting audio.
In the following, we describe how we addressed the design goals by using iClickers, augmenting meeting transcripts, performing text analysis, and developing an interactive interface.
We used iClickers for both organizers and attendees to enable them to respond to meeting discussions at any time during the meeting without the need to manually take notes or speak up to share opinions. The iClicker is a communication device that uses radio frequency to allow users to anonymously respond using its five buttons (Fig. 3). Despite the widespread usage of smartphones, we chose iClickers as the audience response system because recent statistics show that 20% of U.S. citizens do not yet have access to smartphones [6]. Furthermore, smartphones are often a major cause of distraction and a hindrance to participation in meetings [17, 85]. There are also technical overheads involved, including installing an application, maintenance, and version compatibility issues across operating systems, which might disengage participants. In contrast, iClickers have proven successful in town halls due to their familiarity and their affordance of anonymity in sharing opinions and receiving attendees' feedback on specific questions [22, 25, 103]. The anonymous use of iClickers could ensure that silent participants can also share their opinions about the ongoing discussion with the organizer without engaging in a potentially heated debate. Moreover, we modified the iClickers to go beyond previous approaches by allowing meeting organizers and attendees to respond to the ongoing discussion using all five options instead of only binary agree or disagree. The tag options are customizable, and depending on their meeting agendas, organizers can set up an appropriate list of tags and feedback options before the meeting. We used different types of iClickers for organizers and attendees: organizers used instructor iClickers and attendees used regular ones (Fig. 3). This helped us effectively separate organizers' and attendees' responses. To reduce the cognitive load of mapping and remembering the options, we labeled each iClicker with the organizers' tags or the attendees' feedback options.
We recorded and synchronized three different sets of data generated simultaneously in the meeting: the discussion audio, the organizers' tags, and the attendees' feedback via iClickers. We recorded the meeting audio using a regular, commonly used omnidirectional microphone. To remove noise from the audio recording, we used the open-source freeware Audacity®. However, capturing organizers' tags and attendees' feedback from iClickers was non-trivial due to limitations in the hardware access and API provided by the iClicker manufacturer. The original software and hardware in factory settings did not provide timestamped data for each click. As a result, we customized the hardware and API to record organizers' tags and attendees' feedback (Fig. 3). We used an Adafruit Feather M0 with the 900 MHz RFM69W/RFM69HW transceiver to collect iClicker clicks and timestamps, which were transmitted over radio frequency on the same bandwidth. This allowed us to accurately and precisely capture and synchronize iClicker interactions to match the time of the discussion.

To transcribe the meeting audio, we used automatic speech recognition from AssemblyAI [8]. We assessed the quality of this method by comparing its output to human-generated reference transcripts and found it to be on par with them. The results of these analyses are presented in full in the Appendix. We combined the transcript with the timestamped tags and feedback to transform the recorded meeting audio into timestamped text. Furthermore, we used the organizers' tags to divide the meeting transcript into manageable and consumable segments. Previous work showed that there is a gap (2 seconds on average) between hearing something, registering it, and taking action upon it, such as clicking a button for annotation [115]. Based on prior work and our early pilot experiments, for each organizer's tag, we created a 30-second time window around the tag (2 seconds before the tag and 28 seconds after the tag). The complete meeting transcript is divided into similar 30-second segments. For each segment, we collected the attendees' feedback provided within that time window. Consequently, the meeting audio is transformed into timestamped segments, each containing transcribed conversation, the organizer's tag, and a set of attendees' feedback (Fig. 4(E)). We also extracted the main discussion points from the transcript segments using TopicRank [24] (Fig. 4(B)). We chose this topic modeling method because it produces better multi-word topics, which are useful for better topic representation [23].
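To make the segmentation step concrete, the sketch below shows one plausible way to implement it. This is an illustration under our own assumptions, not CommunityClick's released code: the function and field names are hypothetical, and we assume the iClicker receiver has already produced lists of timestamped clicks. It anchors a 30-second window around each organizer tag (2 seconds before, 28 seconds after), covers untagged stretches with plain 30-second windows, and then attaches ASR words and attendee feedback to segments by timestamp.

```python
from bisect import bisect_right
from collections import Counter
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Segment:
    start: float                      # seconds from the start of the meeting
    end: float
    organizer_tag: Optional[str]
    text: str = ""
    feedback: Counter = field(default_factory=Counter)

def build_segments(transcript_words, organizer_clicks, attendee_clicks,
                   meeting_length, lead=2.0, window=30.0):
    """Anchor a 30-second window around every organizer tag (2 s before,
    28 s after), cover untagged stretches with plain 30-second windows,
    then attach transcript words and attendee feedback by timestamp.

    transcript_words: list of (timestamp, word) pairs from the ASR output
    organizer_clicks: list of (timestamp, tag) pairs from the organizer's iClicker
    attendee_clicks:  list of (timestamp, attendee_id, option) tuples
    """
    segments, covered = [], 0.0
    for ts, tag in sorted(organizer_clicks):
        start = max(ts - lead, covered)
        while covered < start:                     # plain windows before the tag
            segments.append(Segment(covered, min(covered + window, start), None))
            covered = segments[-1].end
        segments.append(Segment(start, min(start + window, meeting_length), tag))
        covered = segments[-1].end
    while covered < meeting_length:                # plain windows after the last tag
        segments.append(Segment(covered, min(covered + window, meeting_length), None))
        covered = segments[-1].end

    starts = [s.start for s in segments]

    def segment_at(ts):
        """Return the segment whose window contains the given timestamp."""
        return segments[max(bisect_right(starts, ts) - 1, 0)]

    for ts, word in transcript_words:              # assemble per-segment text
        seg = segment_at(ts)
        seg.text = f"{seg.text} {word}".strip()
    for ts, _attendee, option in attendee_clicks:  # aggregate attendee feedback
        segment_at(ts).feedback[option] += 1
    return segments
```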
To summarize the meeting transcript, we used the graph-based TextRank algorithm [99], which is based on a variation of the PageRank [108] algorithm. TextRank is known to deliver reasonable results in summarizing meeting transcripts [55] in unsupervised settings.
However, TextRank treats all input text the same without any domain-specific consideration. Our goal was to incorporate attendees' feedback into the summarization process so that the resulting summary was weighted by attendees' feedback. To that end, we made two critical modifications to the original methodology: 1) incorporating attendees' feedback while computing the relative importance of sentences, and 2) replacing the vanilla similarity function used in TextRank with a bag-of-words ranking function called BM25, which is proven to work well in information extraction tasks [116].

Each individual transcript is treated as a set of sentences $(s_1, s_2, \ldots, s_n)$. Each sentence is considered an independent node. We used a function to compute the similarity between these sentences to construct edges between them. The higher the similarity between two sentences, the more important the edge between them will be in the graph. The original TextRank algorithm considers the relation between two sentences based on the content (tokens) they share. This relationship between two sentences $S_i$, $S_j$, over every common token $w_k$, is given by Equation 1.

$$ \mathrm{sim}(S_i, S_j) = \frac{|\{w_k \mid w_k \in S_i \;\&\; w_k \in S_j\}|}{\log(|S_j|) + \log(|S_i|)} \qquad (1) $$

We replaced the above similarity function with a BM25 ranking function defined by Equation 2,

$$ \mathrm{sim}(S_i, S_j) = \sum_{k=1}^{n} \mathrm{IDF}(w_k \in S_i)\, \frac{f(w_k \in S_i, S_j)\,(a + 1)}{f(w_k \in S_i, S_j) + a\left(1 - b + b\,\frac{|P|}{\mu_{DL}}\right)} \qquad (2) $$

where $a$ and $b$ are function parameters, $f(w_k, S_j)$ is $w_k$'s term frequency in $S_j$, $\mathrm{IDF}$ is the inverse document frequency, and $\mu_{DL}$ is the average length of the sentences in our collection. More importantly, since we timestamped and tagged our transcripts, we knew which instances were potentially more important in terms of garnering attendees' feedback. To incorporate this, we augmented every edge weight $w_{i,j}$ by a factor $\epsilon$ determined experimentally (set to 1.10 if either of the sentences being compared prompted feedback, and 0.90 otherwise). From the constructed graph of sentences, we computed the final relative importance of every vertex (sentence) and selected the top-$n$ sentences, corresponding to about 30% of the total length of the transcript, which were then presented, in their original order, as the summary of the meeting discussion.

We performed a series of ablation tests to evaluate the robustness of our summarization approach. The results of these tests are presented in full in the Appendix. For three different meeting transcripts that we had generated using CommunityClick, we quantitatively evaluated the auto-generated summaries against human-annotated reference summaries using the widely used ROUGE metrics [89]. Across the different meeting transcripts, we observed similar ROUGE scores, indicating that we produced summaries of consistent quality across different meetings. Additionally, we evaluated our algorithmic approach on the popular AMI meeting corpus [33]. Our results were found to be comparable to the current state-of-the-art methods [121, 137] employed on the AMI dataset under unsupervised settings.
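A minimal sketch of this feedback-weighted ranking is shown below. It is not the authors' implementation: the helper names, the IDF variant, the BM25 parameter values, and the use of networkx's PageRank are our assumptions, and for simplicity the (asymmetric) BM25 score is computed once per sentence pair and used as an undirected edge weight. The sketch scores sentence pairs with BM25 (Equation 2), scales each edge by 1.10 or 0.90 depending on whether either sentence drew attendee feedback, runs PageRank over the graph, and keeps roughly the top 30% of sentences in their original order.

```python
import math
from collections import Counter

import networkx as nx  # assumed here for PageRank; any PageRank implementation works

def bm25_idf(sentences):
    """One plausible BM25-style IDF over the sentence collection
    (the exact IDF variant is not specified in the text)."""
    n = len(sentences)
    df = Counter(w for s in sentences for w in set(s))
    return {w: math.log((n - c + 0.5) / (c + 0.5) + 1.0) for w, c in df.items()}

def bm25_sim(si, sj, idf, avg_len, a=1.2, b=0.75):
    """Equation 2: BM25 score of sentence sj against the tokens of si.
    The values of a and b here are conventional defaults, not the paper's."""
    tf = Counter(sj)
    score = 0.0
    for w in set(si):
        f = tf[w]
        if f:
            score += idf.get(w, 0.0) * f * (a + 1) / (f + a * (1 - b + b * len(sj) / avg_len))
    return score

def feedback_weighted_summary(sentences, has_feedback, ratio=0.30):
    """sentences: tokenized sentences in transcript order;
    has_feedback[i]: True if sentence i fell in a segment that drew attendee clicks.
    Returns indices of the selected sentences in their original order."""
    idf = bm25_idf(sentences)
    avg_len = sum(len(s) for s in sentences) / max(len(sentences), 1)
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            w = bm25_sim(sentences[i], sentences[j], idf, avg_len)
            if w <= 0:
                continue
            # Boost edges that touch feedback-bearing sentences, damp the rest.
            eps = 1.10 if (has_feedback[i] or has_feedback[j]) else 0.90
            graph.add_edge(i, j, weight=w * eps)
    scores = nx.pagerank(graph, weight="weight")
    k = max(1, int(ratio * len(sentences)))
    return sorted(sorted(scores, key=scores.get, reverse=True)[:k])
```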
We developed CommunityClick's interface as a web application. It allows multi-faceted exploration and analysis of meeting data through components including the title, filters, discussion topics, timeline, summary, text editor, and finally the augmented meeting transcript segments (Fig. 4). The title contains metadata about the meeting, including the meeting title, date, and location (Fig. 4(A)). The filters allow organizers to explore the transcript segments according to the selected feedback or tags of interest (Fig. 4(F, H)). These options are customizable, and organizers may customize the tags to suit their purpose before the meeting. We chose to visually collapse transcript segments that are filtered out, as opposed to completely removing them from the view, to communicate to users that additional conversations transpired between the segments currently visible.

Fig. 4. A snapshot of CommunityClick's interface. A) The title provides useful metadata about the meeting, such as date and location. B) The main topics extracted from the meeting transcript. C) The timeline visualizes organizers' tags in chronological order. Each circle represents a tag. Clicking on a circle brings organizers to the corresponding transcript segment. D) The interactive feedback-weighted summary. E) The transcript view displays the transcript text alongside the organizer's assigned tag, the main topic, and the aggregated attendees' feedback in that time interval for each segment. F) Filters for attendees' feedback (based on what was provided on the iClicker). G) The bar chart displays attendees' feedback. H) Filters for the organizer's tags. I) Rich text editor for organizers to author the report. J) Options to view or collapse the summary or the transcript view.

In the topic and timeline components, we provide the list of most relevant topics and the timeline of the meeting discussion (Fig. 4(B, C)). The organizers can filter the transcript segments based on any topic. The timeline displays the organizers' tags as circles in chronological order, where each circle represents a tag and its color corresponds to the tag type (Fig. 4(C)). This provides the organizers with a temporal distribution of tags that demonstrates how the conversation progressed during the meeting. When a circle is selected, the transcript is scrolled to the corresponding segment, whose background is highlighted to distinguish it from other segments.

The feedback-weighted extractive summary is presented in a text box (Fig. 4(D)). Each of its sentences is interactive and, upon selection, navigates to the transcript segment it was extracted from. This enables organizers to explore the transcript and get a better understanding of why the sentence was added to the summary. Below the summary, we added a rich text editor for authoring the meeting report with rich formatting options (Fig. 4(I)). We also added options for attaching additional files or images. Once the report is created, it can be printed in PDF format directly, without switching to external printing applications.

Finally, we present the augmented transcript divided into transcript segments (Fig. 4(E)). The segments are ordered chronologically. Each transcript segment contains the transcript text, the associated organizer's tag, the most relevant extracted topic, the time of the segment, an option to import the summary of the selected transcript segment into the text editor, and the aggregated attendees' feedback in the form of a bar chart. For easy tracking, we highlight the transcript text that is added to the summary. Organizers can edit the segments to assign or change tags and topics. However, they do not have control over attendees' feedback, to mitigate bias injection. To reduce clutter on the screen, we added two additional filters to collapse the summary or the augmented transcript (Fig. 4(J)).
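As a purely illustrative example of the data each transcript segment carries to the interface, a record along the following lines would be sufficient to render the segment view, the tag and topic labels, the summary highlight, and the per-segment feedback bar chart. The field names below are our own shorthand, not CommunityClick's actual schema.

```python
# A hypothetical segment record as the interface might consume it (field names
# are illustrative; the paper does not publish its data schema).
segment = {
    "segment_id": 17,
    "start": "00:08:30",              # 30-second window boundaries
    "end": "00:09:00",
    "organizer_tag": "Main Issue",    # assigned via the organizer's iClicker, editable later
    "topic": "parking permits",       # top TopicRank phrase for this segment
    "transcript": "…transcribed discussion for this window…",
    "in_summary": True,               # highlighted because a sentence was selected by the summarizer
    "feedback": {                     # aggregated attendee clicks, rendered as a bar chart
        "Agree": 6, "Disagree": 1, "Unsure": 0, "Important": 3, "Confused": 0
    },
}
```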
To explore whether CommunityClick could be effectively deployed in a real-world town hall meeting, we performed a pilot study where we simulated a town hall with nine participants. We recruited eight participants as meeting attendees and one participant as the organizer, who had previous experience with organizing meetings. We refer to the attendees who participated in our pilot study as PA. We recruited all participants by word of mouth from a public university in the U.S. For the discussion topic, we selected two contentious issues regarding the university that were prominent at the time of our pilot study. The topics included discussions around building a new common room for graduate students and the rise of racist activities across the campus. All participants were graduate students (6 males and 2 females, with an average age of 27.25). The goal of the pilot study was to assess the system workflow for potential deployment and whether the attendees could share their feedback silently using iClickers without interrupting others. Furthermore, we used the augmented transcript from the meeting to enable the pilot study organizer to explore the meeting discussions and identify potential interface issues.

The meeting took 60 minutes, which is similar to a traditional town hall. We collected 292 items of feedback from the attendees (an average of 36.5 per attendee). One attendee (PA-4) raised the concern that unrestricted clicking might "dilute the opinion values". The organizer mentioned that using iClickers enabled him to focus more on the discussion and that the interface allowed him to better capture attendees' feedback. He recalled that some attendees were silent, but the bar charts showed feedback from all eight participants, meaning they were participating silently. He also identified the flow of the discussion and important discussion points.

This pilot study helped us to better understand and solidify operational procedures for a real-world deployment of our system. Based on the feedback we received, we modified the system and user interface. For instance, we added a spamming prevention technique [123] by calibrating the system to capture one click from each attendee's iClicker in a 30-second window, to negate the possibility of diluting the values of specific feedback options. Furthermore, we added an option to collapse the summary or transcript to reduce interface clutter. Finally, we used written labels on the iClickers to reduce attendees' cognitive load in remembering the mapping of response options.

We evaluated the application of iClickers as a real-time response system, particularly for the ability of silent attendees to share feedback, and the efficacy of our approach in enabling organizers to explore, capture, and incorporate attendees' feedback to author more comprehensive reports. To that end, we conducted a field experiment to examine whether the attendees could effectively use iClickers to voice their feedback. In addition, we followed up by conducting semi-structured interviews with 8 expert organizers to evaluate whether CommunityClick could enable them to capture attendees' feedback and generate more comprehensive reports.
We deployed CommunityClick at a town hall in a college town in the U.S. The meeting focused on a new set of proposals to improve the parking condition of the town. We reached out to the town officials a month before the meeting took place. They allowed us to deploy our system, record the discussion, and use the data. They also introduced us to the organizer who facilitated the meeting. We explained the procedure to them and discussed the tags to be used for both the organizer and attendees. The town hall took place on a Thursday evening and was open for all to attend.
There were 31 attendees and 1 organizer present at the meeting. We provided the 31 attendees with iClickers labeled with the Agree, Disagree, Unsure, Important, and Confused tags. For the organizer, we provided an iClicker labeled with the Main Issue, Concern, Supportive, New Idea, and Good Point tags, as per our pre-meeting discussion.
At the beginning of the town hall, we provided the meeting attendees with a brief five-minute tutorial on how to use the iClickers. We also received consent from the attendees to record their discussions. The meeting began with an organizer presenting the meeting agenda and the new parking proposals to the meeting attendees. The attendees and the organizer used iClickers to tag the conversations throughout the meeting. After the presentation, the attendees engaged in discussing the proposals. The meeting lasted for 76 minutes. At the end of the meeting, we gave attendees post-study questionnaires that asked various questions, such as their reasons for attending the meeting, their usual experience during town halls, whether they could share their opinions by speaking up, and how using iClickers compared to such experiences. They responded on a five-point Likert scale. We also asked them open-ended questions about their experience of working with iClickers, whether they faced any issues or challenges, and suggestions to improve their experiences and our approach. The post-study questionnaire is provided as supplementary material.
We were given permission by government officials to collect and use the data from the town hall. We collected 61 minutes of meeting audio for transcription. We also collected the organizer's tags and attendees' feedback from the meeting. In total, we captured 56 tags from the organizer. Out of 31 meeting attendees, 22 used the iClickers we provided to share their feedback. Of these 22 attendees, 20 filled out the post-study questionnaire. We report the statistics based only on these 20 attendees' responses. We captured a total of 492 items of attendee feedback, with an average of 24.6 feedback items per attendee and a standard deviation of 6.44. This data was later used to populate CommunityClick's interface for demonstrating its various functionalities to meeting organizers, which we describe in Section 5.2. We also collected the post-study questionnaire responses and entered them into spreadsheets to create charts (Fig. 5) and statistics for analysis.
From the analysis of the attendees’ iClicker usage patterns, we found that the attendees used the Agree tag the most, with 187 clicks (38%), followed by Important with 103 clicks (21%), Disagree with 93 clicks (19%), Confused with 79 clicks (16%), and finally Unsure with the least, only 30 clicks (6%). Initially, we were surprised to see the large gap between agreement and disagreement. However, upon closer inspection, we found that on several occasions, attendees who were using iClickers were clicking Agree when other, vocal attendees were verbally expressing their disagreement with a discussion topic. This behavior pattern indicates that the silent attendees used iClickers to provide their support for an ongoing argument alongside sharing their own opinions.
[Fig. 5 panels: Q1 “Satisfied with public meetings”, Q2 “iClicker was easy to get used to”, Q3 “I can voice my thoughts - Speaking up”, Q4 “I can voice my thoughts - iClicker”; responses from attendees A1-A20 on a five-point scale from Strongly Disagree to Strongly Agree.]
Fig. 5. The results from the field experiment. A) Attendees’ responses show that the majority of meeting attendees were not satisfied with the status quo of town halls but found iClickers easy to get used to. It also displays the number of attendees who thought they could share their voices by speaking up or by using iClickers. B) A deeper comparison between speaking up and using iClickers to share attendees’ feedback. The diamonds (◆) and stars (✶) represent 20 attendees’ (A1-A20) responses to questions that asked them to rate their experiences of sharing opinions by speaking up and by using iClickers, respectively, during town halls. The arrows show the difference and the increase or decrease in their ratings. The arrows demonstrate that the majority of the participants who were not satisfied with the current methods of sharing opinions during town halls (A1-A3, A5) found iClickers to be a better alternative. They also show that the participants who were comfortable with speaking up during meetings did not endorse iClickers as strongly to share their voices.
We also found that the attendees did not press any iClicker options during the introduction, when the organizers were setting up the discussion, or at the conclusion of the meeting, when the organizers expressed gratitude for attending and engaged in other social conversation. This suggests that the attendees took their opinion sharing using iClickers seriously and did not randomly click different options during the meeting.
From the analysis of the post-study questionnaires, we found that all of the 20 attendees either lived or worked in the town of Amherst, where the town hall was organized. 95% of these attendees (19 responses) were well accustomed to such meetings and mentioned attending similar town halls twice a year, while 50% (10 responses) attended these meetings more than five times per year. When asked about their exposure to technology, all meeting attendees reported owning at least one computer and one smartphone with an internet connection, which they were comfortable using. However, their responses about their experiences in town halls varied, as presented in Fig. 5(A). 25% of attendees (5 responses) mentioned that they did not feel they were able to voice their thoughts in town halls. Only 35% (7 responses) were pleased with the way such town halls are organized, while 50% of the attendees (10 responses) were neutral in their responses. 75% of attendees (15 responses) responded that they got used to iClickers quickly. 85% (17 responses) mentioned they were able to share their thoughts using iClickers, compared to only 65% (13 responses) who were comfortable with speaking up to share opinions. The majority of the attendees (90%, 18 responses) were positive about their experiences of using iClickers to share their opinions. One attendee (A9) mentioned, “
I feel like I could contribute more than usual and I would definitely like to use it in future meetings. ” We further compared the attendees’ responses on their ability to share their thoughts in town halls by voicing their opinions against using iClickers (Fig. 5(B)). The data shows that almost all of the attendees who did not think they could share their thoughts by speaking up, except for one (A4), thought they could voice their opinions using iClickers. One such attendee (A2) mentioned, “
I didn’t like what others were saying. But instead of interrupting, I just used the clicker to say that I didn’t agree with them. ” We also found that, while agreeing that iClickers could provide a way to voice opinions in town halls, the attendees who strongly preferred speaking up did not rate their experience of using iClickers to share opinions as highly. One of these attendees (A17) mentioned, “
I was distracted, so I didn’t use it that much. ” We identified two important insights from this field experiment (Fig. 5). First, it demonstrated that the silent attendees who could not speak their minds found a way to voice their opinions in town halls using iClickers, without the apprehension of confrontation (Fig. 5(A) and (B), attendees A1-A3, A5). However, we also found that some attendees who strongly agreed that they were satisfied with the current method of sharing opinions by speaking up did not as strongly endorse iClickers as a way to share their opinions (Fig. 5(B), attendees A13-A15, A17, A18). We speculate two reasons for such reduced ratings for iClickers. First, for attendees who are already comfortable with speaking up, iClickers might seem like an additional step to share opinions, which might lead to distractions, as mentioned by one of the attendees (A17). The second reason might be a reluctance to deviate from the norm and use technology in established, albeit imperfect, town hall proceedings and customs. Nevertheless, our results suggest that the addition of iClickers could be an acceptable trade-off between providing the silent attendees a way to communicate their opinions and mildly inconveniencing the adept and vocal meeting attendees.
We conducted semi-structured interviews with 8 expert meeting organizers who were experienced in organizing or facilitating town halls to gather data on the community’s needs, issues, and ideas. They were also adept at compiling meeting reports, which play a pivotal role in informing civic decision-making. Our objective was to examine whether CommunityClick’s interactive interface could help the organizers better capture the attendees’ feedback and author more comprehensive reports that preserve the equity and inclusivity of voiced opinions in town halls.
We reached out to a total of 29 expert organizers from across the U.S. Eight of them responded and agreed to help us evaluate CommunityClick. Our interviewees were experts in their fields with intimate knowledge of town hall organization and decision-making. We refer to our semi-structured interview participants as P1-P8. On average, they had over 20 years of experience. One interviewee (P1) was the organizer from our field experiment, the town hall on parking. We made connections with the others (P3-P8) during our formative study. All of our interviewees were based in the U.S. Several experts we engaged with to evaluate CommunityClick were excited about its potential and agreed to deploy the system in their then-upcoming town halls. Our original evaluation plan involved several deployments in the wild, then providing organizers with the meeting audio from these deployments and asking them to write two reports: one using their current method of writing reports, and the other using CommunityClick’s augmented meeting transcript and interactive interface. We wanted to study the differences between these reports to investigate the efficacy of our system. However, the COVID-19 pandemic forced the organizers to cancel all town halls until further notice, and we were compelled to cut our evaluation short. Due to this setback, we revised and modified our evaluation procedure as follows.
Table 1. This table shows the themes that emerged from analyzing the interviews with the organizers. The codes associated with each theme and their descriptions are also presented in the table.
Themes | Codes | Descriptions
Enabling inclusivity | Equitable platform, problem speaking up, opinion sharing, understanding others, inclusive opinions | CommunityClick’s impact on inclusivity in town halls
Diverse perspectives | Shared narrative, honest reflections, attendees’ reactions, identifying conversation flow | Different perspectives and opinions shared in town halls
Report quality | Meeting summarization, missing information, credible process, comprehensiveness, accurate representation | CommunityClick’s utility in creating reports
Meeting organization | Unstructured discussions, real-time attendees’ response, tracking response, customized tags, measuring consensus | Organizing meeting-generated data
Interface learnability | Intuitiveness, easy-to-use, formatting, data exploration | Users’ ability to learn and use interface features
Concerns and caveats | Technology as a barrier, tech-savvy, distraction factors, young generation | Concerns regarding CommunityClick’s usage in town halls
Improvement suggestions | Real-time feedback, opinion statistics, organizers’ input | Suggestions to improve our approach

We deployed the CommunityClick interface on a public domain and shared it with our interviewees via email at least two weeks before our interviews. To maintain privacy of usage, we provided each organizer with their own user account and login credentials. We also provided detailed instructions on how to use CommunityClick’s various features and encouraged the interviewees to explore the interface at their own convenience. We populated the interface with the data collected from the simulated meetings from our pilot study as well as the meeting from our field experiment for the interviewees to explore. During the interview sessions, we asked them open-ended questions focusing on their current practices for town hall organization, how using CommunityClick differed from these practices, how useful CommunityClick could be in capturing silent attendees’ feedback and marginalized perspectives, whether the interface could allow them to author better reports, and finally, suggestions to improve CommunityClick. We also allowed them to ask any questions they might have about CommunityClick. The interview questions are provided as supplementary material. We conducted the interviews over video conferencing via Zoom [12]. The interviews lasted between 45 and 60 minutes. All participation was voluntary. Each interview was conducted by an interviewer and a dedicated note-taker from our research team.
We transcribed over 400 minutes of audio recordings from our interviews with organizers. We also took extensive notes throughout the interviews. Finally, we thematically analyzed the interview transcripts and notes using the open-coding method [31]. Two members of our research team independently coded the data at the sentence level using a spreadsheet application. The inter-coder reliability was 0.89, measured using Krippendorff’s alpha [84]. We had several iterations of discussions among the research team to condense the codes into the themes presented in Table 1.
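As a point of reference, inter-coder agreement of this kind can be computed with NLTK’s agreement module, as in the minimal sketch below; the coders’ labels and items are placeholders, not our actual codes.

```python
from nltk.metrics.agreement import AnnotationTask

# Each triple is (coder, item, label); the labels here are illustrative only.
data = [
    ("coder1", "sent1", "Enabling inclusivity"),
    ("coder2", "sent1", "Enabling inclusivity"),
    ("coder1", "sent2", "Report quality"),
    ("coder2", "sent2", "Report quality"),
    ("coder1", "sent3", "Concerns and caveats"),
    ("coder2", "sent3", "Meeting organization"),
]

task = AnnotationTask(data=data)
print("Krippendorff's alpha:", round(task.alpha(), 2))
```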
Our analysis of interview transcripts and notes surfaced critical insights on how CommunityClick could enable attendees to share opinions and help organizers capture inclusive feedback to author more comprehensive reports. We elaborate on these insights in the following and discuss possible room for improvement.
CommunityClick can create a more inclusive platform to share opinions.
Our interviewees were unanimous (8 out of 8) in acknowledging CommunityClick’s potential to create an inclusive platform for the community to share their opinions. They shared with us several example scenarios they had experienced where CommunityClick could have provided silent attendees a way to speak their minds. P1 mentioned, “
People want to share their opinions, but sometimes they just can’t express themselves because they’re not comfortable talking, or they’re nervous about how they’ll appear to or who is around the table. Often they are intimidated. Here, "intimidated" is a strong word. But I don’t think it’s the wrong word. ” P2 mentioned,
There was an individual who attended several meetings, it was clear that their presence had an impact on people’s willingness to speak at all, or the opposite effect, where people escalated in reaction to that person. Giving them the ability to click help both ways. They can avoid confrontation or avoid escalation by just clicking.
Similarly, P3 drew examples from his experiences, saying,
Even if the attendees are from the U.S., [people with] different upbringings or cultural backgrounds have a disadvantage compared to those who are quite familiar with the mores of group dynamics. In our town halls, we only take votes on questions or get agreements, but in a conversation, there are so many euphemisms, colloquialisms, and metaphors that make it difficult for someone unfamiliar with them to understand others’ reactions. There is real value in using options like “confused” and “unsure” to allow them to record that they didn’t understand the conversation instead of forcing them to agree or disagree.
P6 found further value in separating the attendees’ tags and the organizers’ tags to establish organizers’ credibility. She mentioned, “
The organizers cannot unintentionally skew the attendees’ feedback because [their tags] are separate. That way, we know the recorded feedback is unbiased. ”
The augmented transcripts provide evidence of attendees’ reflections.
One of our primary goals was to enable organizers to have access to attendees’ perspectives to form a shared narrative of the meeting discussions. After exploring CommunityClick’s interface, the majority of interviewees (7 out of 8) mentioned how it enabled them to capture meeting attendees’ reflections on the meeting agenda. P6 mentioned, “
It provides a way of ensuring that voices and reactions are reflected as people speak and click. It is a huge step towards having a more honest reflection of what really went on in the meeting. ” P3 further emphasized how CommunityClick not only captured the attendees’ feedback but also allowed navigation of the conversation flow using the Timeline,
This tool allows me to see both how many ideas have traction or agreement and how many don’t, but just as importantly, how the flow went. The facilitators are concerned with the way topics are discussed in town halls. These topics are influenced by the surrounding conversations. It [Timeline] allows me to see reactions that might or might not be intended because of the sequence of conversations. Having a way to track that has a huge value.
Regarding the interactive augmented transcript, P4 specifically preferred the way it enabled her to track attendees’ responses. She drew a comparison with her usual methods for note-taking during town halls, saying,
We usually have a table facilitator and then a table observer. The table observer takes detailed notes, but it adds to the number of staff we have to have. So that creates an additional challenge, but the speech-to-text transcription makes a big difference in recording people’s reactions. With [CommunityClick], maybe we won’t need a table observer.
P5 also mentioned how CommunityClick gave credence to attendees’ reactions during the meeting discussions through the feedback bar charts. She said,
It makes a lot of sense to see where people are aligned, where the challenges are, and giving information from their reactions. When changing policies, we hear from only a few voices who are either for or against something, and they tend to dominate the conversation. Having a visual and data-driven way to show what was the actual participation is gold. Sometimes people feel that a proposal is not aligned with their ideas. With the bar chart, you can show them that maybe you are the outlier and others agree with the proposal.
CommunityClick can help create more comprehensive and accurate reports.
All of our interviewees had prior experiences of writing reports by summarizing the meeting discussions and identifying key discussion points. They found various aspects of CommunityClick useful to not only author more comprehensive reports but also more accurate ones that lend credibility to the report creation process. P1 drew parallels with his experience of working in scenarios where designated note-takers took notes and his team generated the reports from those notes. He mentioned, “
People who take notes have varying abilities and the notes vary in quality. Instead, as you are writing reports, you have [CommunityClick], where you can see and add the reactions to what [attendees] discussed right away, it builds credibility for the process. ” P3 echoed similar sentiments, saying,
You are usually distracted by the conversation while taking notes, which means you might miss every third word at a particular moment, which could be the difference between agreement and disagreement. Having it transcribed and summarized will remind a facilitator of some things that he or she may not have remembered or make it more accurate, because they may have remembered it differently. I love the fact that the [text analysis methods] can capture that objectively for us.
P4 also emphasized the usefulness of importing a summary to the text editor. She mentioned,
Having the text editor where you can start writing the report and pull in pieces from the transcript could be really helpful, because then as you read through the transcript and you’re writing about some themes, you can pull characteristic quotes that would really help bring in more evidence for claims for those themes.
Furthermore, we found that the report creation process can take a few hours to a few days depending on variables such as the way notes were taken, the length of the meeting, report creators’ skills, etc. P7 highlighted the reduced workload and efficiency that CommunityClick could provide, saying, “
There is a physical component of getting into it, typing it up, theming, organizing, and editing which always takes longer than anticipated, I can see some of those issues can be fixed with this. ”
CommunityClick preserves the flow of meeting discussions by establishing an implicit structure.
The majority of our interviewees (6 out of 8) thought CommunityClick could be best utilized to organize unstructured meeting discussions. They emphasized that, contrary to asking meeting attendees to respond to specific questions in town halls, CommunityClick allowed attendees to respond whenever they wanted, creating an implicit structure for the meeting while preserving the flow of discussion. One interviewee (P1) mentioned, “ [CommunityClick] would provide the biggest benefit in more unstructured kind of discussions. If you have a town hall, where people are less likely to speak up, [tags] would be helpful to understand their reactions and help with the theming. ” Another interviewee (P5) mentioned, “
It’s hard to keep track of many ideas, but the visual organization of information helps to gauge reactions and figuring out if we reached consensus. But most importantly, it helps me to see if there are any social or racial injustice components into the proposals where there can be negative reactions. ” They also found the option to customize the attendees’ feedback and organizers’ tags useful to adapt to different meeting scenarios. One interviewee (P3) mentioned, “
Words may mean different things in different meetings. Having the ability to label and
customize [tags] individually would be a way for different organizations to adjust to that. Sometimes we want to know [attendees’] hopes and concerns, but other times, we just want to know if they agree or disagree. ” However, P7 raised a concern about larger meetings, saying, “
I think [CommunityClick] will be useful for smaller meetings, but if there are hundreds of people, and everyone is speaking over each other, I’m not sure if you will be able to cope with that. ”
The simplicity and learnability of the interface affords intuitive exploration.
From a usability standpoint, all of our interviewees (8 out of 8) found CommunityClick’s interface to be simple and straightforward to work with. P3 extolled the interface, saying, “
It’s very intuitive, simple, easy to use, and navigate after the fact, edit, and update. All user interface features look well-thought-out to me considering the inner workings are extremely delicate and complicated. ” P4 valued the rich editing options of the text editor. She said, “
The automatic summaries can be used as a starting point of the report, as an initial cut, and then I can delete the things that might not be very useful and build up the report by adding more to it and formatting it. I can clearly see a lot of thought was put into designing the interface. ” P5 thought that the interactivity of the timeline was useful for navigating the augmented transcript. She mentioned, “
When I started clicking on the buttons on the circles at the top [timeline], it was very intuitive, like, it just automatically brings you to the places where that correlates with the statements, so you understand what it’s connected to. ” P6 further emphasized CommunityClick’s potential as a record-keeping and meeting exploration system, saying, “
Everything is linked together. So in that sense, it makes intuitive and logical sense when I’m looking at the data. It will be a total game-changer for policymakers and community organizers. ”
Concerns around technology in town halls.
Although our interviewees praised CommunityClick, some of them raised a few important concerns. P8 mentioned how technology usage could be troublesome in town halls, saying, “
It feels like the technology itself could be seen as a barrier, because a lot of people might not feel quite as comfortable, clicking on things and reacting to. ” On a different note, P4 raised concerns about the sense of urgency such technology might impose on the meeting attendees. She said, “ [Attendees’] reactions, they are decisions that are being made in the spur of the moment as they’re hearing information. And it’s such a complex, sociological, and psychological response to information. ” Her concern was whether the urge to immediately respond to someone’s perspectives could inhibit the ability to contemplate and deliver a measured response. Further concerns arose from P5, who mentioned how younger meeting attendees might have an advantage in town halls if the technology is heavily used, saying, “
Younger generations tend to use technology so much more easily. And they turn to it so much more easily than older generations. ”
Possible room for improvements.
We received some feedback from our interviewees on how to improve CommunityClick. Some of these suggestions focused on adding more real-time components to CommunityClick that can further augment the ongoing discussions. For example, P3 mentioned, “
Right now, [CommunityClick] helps me to understand attendees’ reactions after the fact. I think if the facilitators could see them in real-time as the attendees are clicking away, they might be able to steer the conversation better to be more fruitful and fair. ” Other suggestions involved adding functionalities to add the organizers’ own topics on top of the automatically extracted topics. In that regard, P4 mentioned, “
I guess it depends on who is using the system, but we look to dive a little bit more and would want to maybe customize the topics and themes ”.
From our field experiment and interviews with experts, we found that CommunityClick could be used to create an equitable platform to share more inclusive feedback in town halls. Compared to
prior approaches to utilizing technology in town halls [25, 103], CommunityClick enabled attendees to utilize iClickers to silently and anonymously voice their opinions at any time during the meeting without interrupting the discussion or the apprehension of being shut out by dominant personalities. We extended the use of audience response systems in prior works by adding five customizable options that go beyond binary agreement or disagreement and by modifying iClickers as a real-time response system [22, 25]. The experts we interviewed valued options such as Confused or Unsure to identify if attendees were disengaged or did not understand the conversation, without forcing them to agree or disagree [76, 119]. The customizability of both organizers’ tags and attendees’ feedback further added to the flexibility of our approach, which could potentially increase adaptability in meetings with diverse agendas in both civic and other domains. Moreover, the automation and augmentation of speech-to-text transcription, extraction of the most relevant discussion topics, and feedback-weighted summarization of the meeting discussion could potentially eliminate the need for separate note-takers. According to the organizers we interviewed, it could reduce the manpower requirement significantly compared to established approaches [90, 91].
From organizers’ perspectives, CommunityClick could help create more comprehensive and accurate reports that provide evidence of attendees’ reflections. Prior work experimented with annotations during face-to-face meetings [74, 78] for memory recall. In our work, we empowered organizers to go beyond recollection of events during meetings by integrating their own customized tags and enabled them to capture a more comprehensive picture of the meeting discussion. Furthermore, our interactive summary and attendees’ feedback visualization provided a visual and data-driven way to highlight attendees’ viewpoints and outliers on critical points of discussion. This could enable organizers to receive a clearer reflection of what occurred in the meeting and author more accurate reports based on tangible evidence rather than incomplete interpretation [94], which could further lend credibility to the report creation process. Prior work highlighted concerns about accuracy in computational approaches to analyzing meeting data [97]. However, from our interviews with experts, we found that the accuracy and comprehensiveness of meeting reports often depend on meeting length, the method of taking notes during the meetings, and note-takers’ skills. We posit that the synchronization of meeting data and the addition of inclusive attendees’ feedback into the summary and interface will enable organizers to author more accurate reports with the added benefit of a reduced manpower requirement. Although some of the meeting organizers we interviewed highlighted that the augmented transcripts, discussion topics, and summaries could only be accessed after the meeting is finished, we argue that the latency is an acceptable tradeoff for increased comprehensiveness, credibility, and accuracy in generating reports. Furthermore, deferring access to the variety of meeting-generated data until after the meeting could also help reduce both organizers’ and attendees’ distraction during the meeting [15, 61] and allow them to engage with the ongoing discussion.
In particular, the separation of attendees’ feedback and organizers’ tags, along with the evidence of attendees’ feedback in meeting reports, could pave the way to instill trust between the decision-makers in the local government and the community members, which is considered to be a wicked problem in the civic domain [39, 41, 63, 94].
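The feedback-weighted summarization discussed above can be pictured, in rough terms, as an extractive scorer whose segment weights are boosted by attendee clicks. The sketch below layers a click bonus on top of a plain word-frequency centrality score; it is an illustration of the general idea under assumed inputs (a pre-segmented transcript and per-segment click counts), not the paper’s actual formulation.

```python
from collections import Counter

def summarize(segments, clicks_per_segment, k=3, feedback_weight=0.5):
    """Pick the k highest-scoring transcript segments.

    segments: list of transcript segment strings.
    clicks_per_segment: number of attendee iClicker presses logged while
        each segment was being spoken (assumed alignment).
    feedback_weight: how strongly attendee feedback boosts a segment,
        relative to plain word-frequency centrality (illustrative knob).
    """
    # Base centrality: average document-level frequency of a segment's words.
    words = [w.lower() for s in segments for w in s.split()]
    freq = Counter(words)

    def base_score(segment):
        tokens = segment.lower().split()
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    max_clicks = max(clicks_per_segment) or 1
    scored = []
    for segment, clicks in zip(segments, clicks_per_segment):
        score = base_score(segment) * (1 + feedback_weight * clicks / max_clicks)
        scored.append((score, segment))

    # Keep the top-k segments, returned in their original meeting order.
    chosen = {s for _, s in sorted(scored, reverse=True)[:k]}
    return [s for s in segments if s in chosen]
```

Under such a weighting, a heavily clicked segment rises in the ranking even when its vocabulary is unremarkable, which matches the intuition of prioritizing attendees’ feedback over purely text-based importance.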
Marginalization can be broadly defined as the exclusion of a population from mainstream social, economic, cultural, or political life [58], which still stands as a barrier to inclusive participation in the civic domain [48, 94]. Researchers in HCI and CSCW have explored various communitysourcing approaches to include marginalized populations in community activities, proceedings, and designs [48, 53, 81, 93, 132]. In this work, we added to this body of work by designing technology that included silent voices in civic participation to increase the visibility of marginalized opinions. Our field experiment and interviews with experts demonstrated the efficacy of our approach to enable,
capture, and include silent attendees’ participation in town halls, regardless of their reasons for keeping silent (social dynamics, fear of confrontation, cultural background, etc.). Our work also answers the call for more inclusive data analysis processes and practices to augment computational approaches [50, 114] by proposing a text summarization method that includes and prioritizes attendees’ feedback when generating meeting discussion summaries. Such inclusive data analysis techniques could reflect the community’s opinions, identify social injustice, and support accountability in the outcomes of civic engagements [53]. However, designing communitysourcing technologies to include marginalized opinions and amplify participation alone may not be enough to solve the inequality of sharing opinions in the civic domain [26, 126]. Despite the success of previous works [25, 53, 90], technology is rarely integrated with existing manual practices, and follow-ups on engagements between government officials and community members are seldom propagated to the community. This lack of communication might leave attendees uncertain about whether actions will be taken to reflect their opinions. As a result, the power dynamics between government officials and the community remain unbalanced, especially for marginalized populations [48, 49, 53, 125]. One way to establish the practicality of using technology to include marginalized opinions is to integrate it into existing processes to convince officials of its efficacy. Our work provides first steps towards the integration of marginalized perspectives; however, long-term studies are required to assess the possibility and feasibility of integrating public perspectives into actual civic decisions.
Our formative study with 66 attendees and field experiment with 20 attendees bore a striking resemblance regarding the attendees’ ability to share their opinions in town halls. We found that 17% (11 out of 66) of attendees from the formative study and 25% (5 out of 20) of attendees from the field experiment were not comfortable speaking up in town halls and needed a way to share their opinions. However, similar to previous works [56, 90], some of the meeting organizers we interviewed were apprehensive about depending solely on technology due to concerns around the logistics involved in the procurement and maintenance of electronic devices such as iClickers, reliability concerns about technology that requires proper management, and an unfair advantage for younger generations who are more receptive to novel technologies. We argue that renewed motivation, careful design choices, and detailed instructions could help overcome the novelty barrier of technology even for people from older generations [44, 129]. Based on our experiences from this study, we advocate for integrating technology with current face-to-face methods. We do not suggest completely replacing traditional methods with technology; rather, we suggest augmenting them with technology to address some of their limitations.
CommunityClick could be gradually integrated as a fail-safe or an auxiliary method to complement the current process. All the functionality of CommunityClick would remain operational while the organizers could take personal notes in parallel alongside their iClicker interactions. This would allow organizers to retain their current practices while taking advantage of augmented meeting transcripts, discussion topics, and summaries from CommunityClick to better capture and understand attendees’ perspectives when compiling reports. Another way to integrate CommunityClick into current processes would be to provide statistics on attendees’ feedback so that the organizers could track the discussion dynamics and facilitate the conversation accordingly to keep attendees engaged. However, prior works suggested that real-time visualizations or displays can add to attendees’ distraction, causing them to disengage from the ongoing discussion [15, 61]. To circumvent this issue, optional companion mobile applications could be introduced only for organizers to receive real-time feedback on attendees’ iClicker responses so that they can make course corrections accordingly without distracting the attendees or influencing their opinions.
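One way to picture such an organizer-only companion view is a small rolling tally that the facilitator’s device polls every few seconds; the class below is a hypothetical sketch under that assumption, not a feature of the current system.

```python
import time
from collections import Counter, deque

class LiveFeedbackWindow:
    """Rolling tally of iClicker presses over the last few minutes,
    intended to be polled only by the organizer's companion device."""

    def __init__(self, window_seconds=300):
        self.window_seconds = window_seconds
        self.events = deque()  # (timestamp, option) pairs

    def record(self, option, timestamp=None):
        self.events.append((timestamp or time.time(), option))

    def snapshot(self, now=None):
        now = now or time.time()
        # Drop presses that have aged out of the rolling window.
        while self.events and now - self.events[0][0] > self.window_seconds:
            self.events.popleft()
        return Counter(option for _, option in self.events)

# Example: the facilitator's view after a burst of reactions.
window = LiveFeedbackWindow(window_seconds=120)
for option in ["Agree", "Agree", "Confused", "Disagree"]:
    window.record(option)
print(window.snapshot())  # e.g. Counter({'Agree': 2, 'Confused': 1, 'Disagree': 1})
```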
From our field experiment and interviews with the experts, we demonstrated CommunityClick’s potential to elevate traditional town halls. We argue that our proposed data pipeline can be expanded to other domains where opinion sharing is important for successful operation. For example, in education, whether in classroom settings or in massive open online courses [51, 73, 83], it could enable students to share feedback at any time without interrupting the class, for instance on whether they understood a certain concept or were confused and needed more elaboration, especially when the class size is large. For educators, it could allow them to track the effectiveness of their content delivery, class progress, and student motivation, which might help them adjust the course curriculum more effectively and efficiently, instead of receiving feedback from students once every semester. More importantly, familiarity with iClickers in education eliminates the entry-point barrier for the technology, making the system readily adoptable [14, 134].
Similarly, CommunityClick could be used as a possible alternative to expensive commercial applications in meeting domains within business and other corporate organizations [1, 2]. Similar to town halls, in a corporate setting CommunityClick could provide a text summary and direct evidence of attendees’ feedback from the meeting transcripts for better decision-making. Automatic text summarization remains a challenging problem [124], especially when it comes to automatically summarizing meeting discussions [57]. Our summarization approach emphasized the importance of meeting attendees’ feedback instead of purely text-based summarization approaches that treat all sorts of text documents similarly [99, 124]. We argue that this feedback-weighted summary could be valuable in generating domain-specific, contextual text summarization in other meeting genres. Furthermore, there are potential applications of CommunityClick as a record-keeping tool, which might be particularly applicable in journalism, where journalists could utilize the iClickers to annotate an interview conversation with important tags and later review the augmented conversation using CommunityClick’s interface to better organize and write news articles or interview reports.
Our evaluations suggested the efficacy of CommunityClick in providing a voice to reticent participants and enabling organizers to better capture the community’s perspectives. However, we had only one real-world deployment of CommunityClick at a town hall due to the pandemic. We will continue to engage with meeting organizers and deploy CommunityClick in town halls to study its long-term impact and attempt to gradually integrate our approach into the predominantly manual town hall ecosystem. Also, as the findings from our field experiment suggested, iClickers might be distracting as a new technology for some attendees in town halls. Furthermore, the logistical issues associated with hardware procurement and the unavailability of software APIs might be a hindrance to some communities. To circumvent these issues, low-cost alternatives to iClickers or fully software-based audience response system applications could be utilized [10, 11]. A fully software-based solution could also enable attendees to provide open-ended textual feedback, which CommunityClick does not allow in its current state. However, further comparative studies are required to assess the cost, efficacy, and applicability of such alternatives to replace iClickers in a way that provides the same benefits without incurring additional financial, computational, or cognitive overhead.
There are several avenues to improve CommunityClick in the future. For example, it could be augmented with more real-time components, including a companion application to deliver dynamic feedback statistics for organizers to access and utilize during the meeting. We will also explore novel methodologies to speed up the automatic speech-to-text transcription process to further
reduce the time required for data processing. One approach could be to utilize parallel pipeline processing [34], where audio signals from the meeting would be processed concurrently, which might reduce processing time significantly.
To further provide evidence from attendees’ feedback to help organizers when authoring reports, the audio of discussions could be added and synchronized with the transcript segments. This could enable organizers to identify vocal cues and impressions from the attendees who spoke during the town hall to further contextualize a discussion segment [98]. In addition, we will investigate the possibility of tracking individual iClicker IDs, without risking the privacy of attendees, to anonymously identify potentially contentious individuals who might be pushing specific agendas or marginalizing minority viewpoints in a discussion. However, further studies are required to understand the potential computational challenges that might arise with such extensions.
To further improve report generation, we will explore novel technologies in natural language generation [133] to automatically write meeting reports that could further reduce meeting organizers’ workload. In addition to exporting the created reports, we will further enable exporting various statistics around attendees’ feedback, organizers’ tags, discussion topics, etc., to be added to the report or used separately in presenting the outcomes of town halls to decision-makers.
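To make the parallel-pipeline idea concrete, chunked meeting audio could be transcribed concurrently along the lines sketched below; transcribe_chunk is a placeholder for whatever speech-to-text call is actually used, and the chunking scheme is an assumption for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def transcribe_chunk(chunk_path):
    """Placeholder for the real speech-to-text call on one audio chunk."""
    raise NotImplementedError("plug in the actual transcription service here")

def transcribe_meeting(chunk_paths, workers=4):
    """Transcribe fixed-length audio chunks concurrently and then
    reassemble them in meeting order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        transcripts = list(pool.map(transcribe_chunk, chunk_paths))
    return " ".join(transcripts)
```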
In this study, we investigated the practices and issues around the inequality of opportunity in providing feedback in town halls, especially for reticent participants. To inform our work, we attended several town halls and surveyed 66 attendees and 20 organizers. Based on our findings, we designed and developed CommunityClick, where we modified iClickers as a real-time response mechanism to give voice to silent meeting attendees and reflect on their feedback by augmenting automatically generated meeting transcripts with organizers’ tags and attendees’ feedback. We proposed a novel feedback-weighted text summarization method along with extracting the most relevant discussion topics to better capture the community’s perspectives. We also designed an interactive interface to enable multi-faceted exploration of the summary, main discussion topics, and augmented meeting-generated data to enable organizers to author more inclusive reports. We deployed CommunityClick in the wild to conduct a field experiment and interviewed 8 expert organizers to evaluate our system. Our evaluation demonstrated CommunityClick’s efficacy in creating a more inclusive communication channel to capture attendees’ opinions from town halls and provide evidence of attendees’ feedback that could help organizers author more comprehensive and accurate reports to inform critical civic decision-making. We discussed how CommunityClick could be integrated into the current town hall ecosystem and possibly expanded to other domains.
REFERENCES
[1] 2015. ICompassTech.
[2] MeetingKing. https://meetingking.com
[3] 2017. Voicea.
Mobile Fact Sheet.
Assembly AI.
iClicker.
Slido.
VoxVote.
[12] Zoom. https://zoom.us/
[13] Brian Adams. 2004. Public meetings and the democratic process. Public Administration Review 64, 1 (2004), 43–54.
[14] Stephen Addison, Adrienne Wright, and Rachel Milner. 2009. Using clickers to improve student engagement and performance in an introductory biochemistry class. Biochemistry and Molecular Biology Education 37, 2 (2009), 84–91.
[15] Katy Appleton and Andrew Lovett. 2005. GIS-based visualisation of development proposals: reactions from planning and related professionals. Computers, Environment and Urban Systems 29, 3 (2005), 321–339.
[16] Mariam Asad, Christopher A Le Dantec, Becky Nielsen, and Kate Diedrick. 2017. Creating a Sociotechnical API: Designing City-Scale Community Engagement. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2295–2306.
[17] Robert Bajko and Deborah I Fels. 2016. Prevalence of Mobile Phone Interaction in Workplace Meetings. In International Conference on HCI in Business, Government, and Organizations. Springer, 273–280.
[18] Mark Baker, Jon Coaffee, and Graeme Sherriff. 2007. Achieving successful participation in the new UK spatial planning system. Planning, Practice & Research 22, 1 (2007), 79–93.
[19] Satanjeev Banerjee and Alexander I Rudnicky. 2006. Smartnotes: Implicit labeling of meeting data through user note-taking and browsing. In Proceedings of the 2006 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: companion volume: demonstrations. Association for Computational Linguistics, 261–264.
[20] William E Bennett, Stephen J Boies, Anthony R Davies, Karl-Friedrich Etzold, and Todd K Rodgers. 1991. Optical stylus and passive digitizing tablet data input system. US Patent 5,051,736.
[21] Tony Bergstrom and Karrie Karahalios. 2007. Conversation Clock: Visualizing audio patterns in co-located groups. In 40th Annual Hawaii International Conference on System Sciences (HICSS 2007). IEEE, 78–78.
[22] Tony Bergstrom and Karrie Karahalios. 2009. Vote and be heard: Adding back-channel signals to social mirrors. In IFIP Conference on Human-Computer Interaction. Springer, 546–559.
[23] David M Blei and John D Lafferty. 2009. Visualizing topics with multi-word expressions. arXiv preprint arXiv:0907.1013 (2009).
[24] Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013. Topicrank: Graph-based topic ranking for keyphrase extraction. In International Joint Conference on Natural Language Processing (IJCNLP). 543–551.
[25] Shelley Boulianne, Kristjana Loptson, and David Kahane. 2018. Citizen Panels and Opinion Polls: Convergence and Divergence in Policy Preferences. Journal of Public Deliberation 14, 1 (2018), 4.
[26] Tony Bovaird. 2007. Beyond engagement and participation: User and community coproduction of public services. Public Administration Review 67, 5 (2007), 846–860.
[27] Daren C Brabham. 2009. Crowdsourcing the public participation process for planning projects. Planning Theory 8, 3 (2009), 242–262.
[28] Daren C Brabham, Thomas W Sanchez, and Keith Bartholomew. 2010. Crowdsourcing public participation in transit planning: preliminary results from the next stop design case. In TRB 89th Annual Meeting Compendium.
[29] Frank M Bryan. 2010. Real Democracy: The New England Town Meeting and How It Works. University of Chicago Press.
[30] John M Bryson, Kathryn S Quick, Carissa Schively Slotterback, and Barbara C Crosby. 2013. Designing public participation processes. Public Administration Review 73, 1 (2013), 23–34.
[31] Philip Burnard. 1991. A method of analysing interview transcripts in qualitative research. Nurse Education Today 11, 6 (1991), 461–466.
[32] Mark Button and Kevin Mattson. 1999. Deliberative democracy in practice: Challenges and prospects for civic deliberation. Polity 31, 4 (1999), 609–637.
[33] Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2005. The AMI meeting corpus: A pre-announcement. In International Workshop on Machine Learning for Multimodal Interaction. Springer, 28–39.
[34] Ronnie Chaiken, Bob Jenkins, Per-Åke Larson, Bill Ramsey, Darren Shakib, Simon Weaver, and Jingren Zhou. 2008. SCOPE: easy and efficient parallel processing of massive data sets. Proceedings of the VLDB Endowment 1, 2 (2008), 1265–1276.
[35] Patrick Chiu, John Boreczky, Andreas Girgensohn, and Don Kimber. 2001. LiteMinutes: an Internet-based system for multimedia meeting minutes. In Proceedings of the 10th International Conference on World Wide Web. ACM, 140–149.
[36] Jason Chuang, Daniel Ramage, Christopher Manning, and Jeffrey Heer. 2012. Interpretation and trust: Designing model-driven visualizations for text analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 443–452.
[37] Stephen Coleman and John Gotze. 2001. Bowling Together: Online Public Engagement in Policy Deliberation. Hansard Society, London.
[38] Gregorio Convertino, Adam Westerski, Anna De Liddo, and Paloma Díaz. 2015. Large-Scale Ideation & Deliberation: Tools and Studies in Organizations. Journal Social Media for Organizations 2, 1 (2015), 1.
[39] Eric Corbett and Christopher A. Le Dantec. 2018. Exploring Trust in Digital Civics. In Proceedings of the 2018 Designing Interactive Systems Conference (Hong Kong, China) (DIS '18). Association for Computing Machinery, New York, NY, USA, 9–20.
[40] Eric Corbett and Christopher A Le Dantec. 2018. Going the Distance: Trust Work for Citizen Participation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 312.
[41] Eric Corbett and Christopher A Le Dantec. 2018. The Problem of Community Engagement: Disentangling the Practices of Municipal Government. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 574.
[42] Dora L Costa and Matthew E Kahn. 2003. Civic engagement and community heterogeneity: An economist's perspective. Perspectives on Politics 1, 1 (2003), 103–111.
[43] Dora L Costa and Matthew E Kahn. 2003. Understanding the American decline in social capital, 1952–1998. Kyklos 56, 1 (2003), 17–46.
[44] Lisa A D'Ambrosio and Alea C Mehler. 2014. Three things policymakers should know about technology and older adults. Public Policy & Aging Report 24, 1 (2014), 10–13.
[45] Rodrigo Davies. 2014. Civic crowdfunding: participatory communities, entrepreneurs and the political economy of place. Entrepreneurs and the Political Economy of Place (May 9, 2014) (2014).
[46] Richard C Davis, Jason A Brotherton, James A Landay, Morgan N Price, and Bill N Schilit. 1998. NotePals: Lightweight Note Taking by the Group, for the Group. University of California, Berkeley, Computer Science Division.
[47] Journal of Educational Psychology 63, 1 (1972), 8.
[48] Jessa Dickinson, Mark Díaz, Christopher A Le Dantec, and Sheena Erete. 2019. "The cavalry ain't coming in to save us": Supporting Capacities and Relationships through Civic Tech. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–21.
[49] Jessa Dickinson, Sheena Erete, Mark Diaz, and Denise Linn Riedl. 2018. Inclusion of Underserved Residents in City Technology Planning. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. 1–6.
[50] Catherine D'Ignazio and Lauren F Klein. 2020. Data Feminism. MIT Press.
[51] Sidney D'Mello and Arthur Graesser. 2007. Mind and body: Dialogue and posture for affect detection in learning environments. Frontiers in Artificial Intelligence and Applications 158 (2007), 161.
[52] Amir Ehsaei, Thomas Sweet, Raphael Garcia, Laura Adleman, and Jean M Walsh. 2015. Successful Public Outreach Programs for Green Infrastructure Projects. In International Low Impact Development Conference 2015: LID: It Works in All Climates and Soils. 74–92.
[53] Sheena Erete and Jennifer O Burrell. 2017. Empowered participation: How citizens use technology in local governance. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2307–2319.
[54] Archon Fung. 2006. Varieties of participation in complex governance. Public Administration Review 66 (2006), 66–75.
[55] Nikhil Garg, Benoit Favre, Korbinian Reidhammer, and Dilek Hakkani-Tür. 2009. Clusterrank: a graph based method for meeting summarization. In Tenth Annual Conference of the International Speech Communication Association.
[56] John Gastil. 2008. Political Communication and Deliberation. Sage.
[57] Dan Gillick, Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2009. A global optimization framework for meeting summarization. IEEE, 4769–4772.
[58] Lisa M Given. 2008. The Sage Encyclopedia of Qualitative Research Methods. Sage Publications.
[59] Leo A Goodman. 1961. Snowball sampling. The Annals of Mathematical Statistics (1961), 148–170.
[60] Eric Gordon and Paul Mihailidis. 2016. Civic Media: Technology, Design, Practice. MIT Press.
[61] Eric Gordon, Steven Schirra, and Justin Hollander. 2011. Immersive planning: a conceptual model for designing public participation with new technologies. Environment and Planning B: Planning and Design 38, 3 (2011), 505–519.
[62] Kenneth L. Hacker and Jan Van Dijk. 2001. Digital Democracy: Issues of Theory and Practice. Sage Publications, Inc., USA.
[63] Mike Harding, Bran Knowles, Nigel Davies, and Mark Rouncefield. 2015. HCI, Civic Engagement & Trust. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI '15). ACM, New York, NY, USA, 2833–2842.
[64] Kurtis Heimerl, Brian Gawalt, Kuang Chen, Tapan Parikh, and Björn Hartmann. 2012. CommunitySourcing: engaging local crowds to perform expert work via physical kiosks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1539–1548.
[65] Clyde Freeman Herreid. 2006. "Clicker" Cases. Journal of College Science Teaching 36, 2 (2006), 43.
[66] Julia Hirschberg and Christopher D Manning. 2015. Advances in natural language processing. Science.
[67] Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3305–3314.
[68] Jane Im, Amy X Zhang, Christopher J Schilling, and David Karger. 2018. Deliberation and Resolution on Wikipedia: A Case Study of Requests for Comments. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 74.
[69] Judith E Innes and David E Booher. 1999. Consensus building and complex adaptive systems: A framework for evaluating collaborative planning. Journal of the American Planning Association 65, 4 (1999), 412–423.
[70] Judith E Innes and David E Booher. 2004. Reframing public participation: strategies for the 21st century. Planning Theory & Practice 5, 4 (2004), 419–436.
[71] Renee A Irvin and John Stansbury. 2004. Citizen participation in decision making: is it worth the effort? Public Administration Review 64, 1 (2004), 55–65.
[72] James Derek Jacoby. 2010. Collaborative and automatic annotations to improve the utility of recorded meetings. Ph.D. Dissertation.
[73] Sara J Jones, Jason Crandall, Jane S Vogler, and Daniel H Robinson. 2013. Classroom response systems facilitate student accountability, readiness, and learning. Journal of Educational Computing Research 49, 2 (2013), 155–171.
[74] Vaiva Kalnikaitė, Patrick Ehlen, and Steve Whittaker. 2012. Markup as you talk: establishing effective memory cues while still contributing to a meeting. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. ACM, 349–358.
[75] Matthew Kam, Jingtao Wang, Alastair Iles, Eric Tse, Jane Chiu, Daniel Glaser, Orna Tarshish, and John Canny. 2005. Livenotes: a system for cooperative and augmented note-taking in lectures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 531–540.
[76] Christopher F Karpowitz and Jane Mansbridge. 2005. Disagreement and consensus: The need for dynamic updating in public deliberation. Journal of Public Deliberation 1, 1 (2005).
[77] Robin H Kay and Ann LeSage. 2009. Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education 53, 3 (2009), 819–827.
[78] Shreeharsh Kelkar, Ajita John, and Doree Duncan Seligmann. 2010. Some observations on the live collaborative tagging of audio conferences in the enterprise. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 995–998.
[79] Christopher Keller, N Finkelstein, K Perkins, S Pollock, C Turpen, and M Dubson. 2007. Research-based Practices For Effective Clicker Use. In AIP Conference Proceedings, Vol. 951. AIP, 128–131.
[80] Catherine Keske and Steve Smutko. 2010. Consulting communities: using audience response system (ARS) technology to assess community preferences for sustainable recreation and tourism development. Journal of Sustainable Tourism 18, 8 (2010), 951–970.
[81] Nam Wook Kim, Jonghyuk Jung, Eun-Young Ko, Songyi Han, Chang Won Lee, Juho Kim, and Jihee Kim. 2016. Budgetmap: Engaging taxpayers in the issue-driven classification of a government budget. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. 1028–1039.
[82] Soomin Kim, Jinsu Eun, Changhoon Oh, Bongwon Suh, and Joonhwan Lee. 2020. Bot in the Bunch: Facilitating Group Chat Discussion by Improving Efficiency and Participation with a Chatbot. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[83] René F Kizilcec, Chris Piech, and Emily Schneider. 2013. Deconstructing disengagement: analyzing learner subpopulations in massive open online courses. In Proceedings of the Third International Conference on Learning Analytics and Knowledge. 170–179.
[84] Klaus Krippendorff. 2011. Computing Krippendorff's alpha-reliability. (2011).
[85] Kostadin Kushlev, Jason Proulx, and Elizabeth W Dunn. 2016. "Silence Your Phones": Smartphone Notifications Increase Inattention and Hyperactivity Symptoms. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 1011–1020.
[86] Sung-Chul Lee, Jaeyoon Song, Eun-Young Ko, Seongho Park, Jihee Kim, and Juho Kim. 2020. SolutionChat: Real-time Moderator Support for Chat-based Structured Discussion. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12.
[87] Peter Levine, Archon Fung, and John Gastil. 2005. Future directions for public deliberation. Journal of Public Deliberation 1, 1 (2005).
[88] Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology (1932).
[89] Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out.
National Civic Review 91, 4 (2002), 351–366.
[92] Narges Mahyar, Kelly J Burke, Jialiang Ernest Xiang, Siyi Cathy Meng, Kellogg S Booth, Cynthia L Girling, and Ronald W Kellett. 2016. UD Co-Spaces: A Table-Centred Multi-Display Environment for Public Engagement in Urban Design Charrettes. In Proceedings of the 2016 ACM on Interactive Surfaces and Spaces. ACM, 109–118.
[93] Narges Mahyar, Michael R James, Michelle M Ng, Reginald A Wu, and Steven P Dow. 2018. CommunityCrit: Inviting the Public to Improve and Evaluate Urban Design Ideas through Micro-Activities. (2018).
[94] Narges Mahyar, Diana V Nguyen, Maggie Chan, Jiayi Zheng, and Steven Dow. 2019. The Civic Data Deluge: Understanding the Challenges of Analyzing Large-Scale Community Input. ACM Designing Interactive Systems (DIS) (2019), 11 pages (to appear).
[95] Jane Mansbridge, Janette Hartz-Karp, Matthew Amengual, and John Gastil. 2006. Norms of deliberation: An inductive study. (2006).
[96] Jennifer Manuel, Geoff Vigar, Tom Bartindale, and Rob Comber. 2017. Participatory Media: Creating Spaces for Storytelling in Neighbourhood Planning. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 1688–1701.
[97] Moira McGregor and John C Tang. 2017. More to Meetings: Challenges in Using Speech-Based Technology to Support Meetings. In CSCW. 2208–2220.
[98] Albert Mehrabian. 2017. Nonverbal Communication. Routledge.
[99] Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. 404–411.
[100] Aditi Misra, Aaron Gooze, Kari Watkins, Mariam Asad, and Christopher A Le Dantec. 2014. Crowdsourcing and its application to transportation data collection and management. Transportation Research Record.
[101] Teaching Sociology 38, 1 (2010), 18–27.
[102] Judith Morse, Margaret Ruggieri, and Karen Whelan-Berry. 2010. Clicking Our Way to Class Discussion. American Journal of Business Education 3, 3 (2010), 99–108.
[103] Mary Ann Murphy. 2009. Promotion of Engaged Democracy and Community Partnerships Through Audience Response System Technology. (2009).
[104] Gabriel Murray, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating speaker and discourse features into speech summarization. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics, 367–374.
[105] Mukesh Nathan, Mercan Topkara, Jennifer Lai, Shimei Pan, Steven Wood, Jeff Boston, and Loren Terveen. 2012. In case you missed it: benefits of attendee-shared annotations for non-attendees of remote meetings. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. ACM, 339–348.
[106] Rand B Nickerson. 1993. Real-time wireless audience response system. US Patent 5,226,177.
[107] Patrick Olivier and Peter Wright. 2015. Digital Civics: Taking a Local Turn. Interactions 22, 4 (June 2015), 61–63.
[108] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking: Bringing Order to the Web. Technical Report. Stanford InfoLab.
[109] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, 311–318. https://doi.org/10.3115/1073083.1073135
[110] Annie Piolat, Thierry Olive, and Ronald T Kellogg. 2005. Cognitive effort during note taking. Applied Cognitive Psychology 19, 3 (2005), 291–312.
[111] Robert D Putnam et al. 2000. Bowling Alone: The Collapse and Revival of American Community. Simon and Schuster.
[112] Lawrence R Rabiner, Biing-Hwang Juang, and Janet C Rutledge. 1993. Fundamentals of Speech Recognition. Vol. 14. PTR Prentice Hall, Englewood Cliffs.
[113] Wendy M Rahn and John E Transue. 1998. Social trust and value change: The decline of social capital in American youth, 1976–1995. Political Psychology 19, 3 (1998), 545–565.
[114] Lisa Marie Rhody. 2016. Why I dig: Feminist approaches to text analysis. Debates in the Digital Humanities (2016).
[115] Evan F Risko, Tom Foulsham, Shane Dawson, and Alan Kingstone. 2013. The collaborative lecture annotation system (CLAS): A new TOOL for distributed learning. IEEE Transactions on Learning Technologies 6, 1 (2013), 4–13.
[116] Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at TREC-3. NIST Special Publication SP 109 (1995), 109.
[117] Gene Rowe and Lynn J Frewer. 2005. A typology of public engagement mechanisms.
Science, Technology, & HumanValues
30, 2 (2005), 251–290.[118] Mariana Salgado and Michail Galanakis. 2014. "... so what?" limitations of participatory design on decision-making inurban planning. In
Proceedings of the 13th Participatory Design Conference: Short Papers, Industry Cases, WorkshopDescriptions, Doctoral Consortium papers, and Keynote abstracts-Volume 2 . 5–8.[119] Lynn M Sanders. 1997. Against deliberation.
Political theory
25, 3 (1997), 347–376.J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018. [120] Ethan Seltzer and Dillon Mahmoudi. 2013. Citizen participation, open innovation, and crowdsourcing: Challengesand opportunities for planning.
Journal of Planning Literature
28, 1 (2013), 3–18.[121] Guokan Shang, Wensi Ding, Zekun Zhang, Antoine Jean-Pierre Tixier, Polykarpos Meladianos, Michalis Vazirgiannis,and Jean-Pierre Lorré. 2018. Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compressionand Budgeted Submodular Maximization. arXiv preprint arXiv:1805.05271 (2018).[122] Yang Shi, Yang Wang, and John Chen. 2017. IdeaWall : Improving Creative Collaboration through CombinatorialVisual Stimuli.
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing- CSCW ’17 (2017), 594–603.[123] Huan Sun, Alex Morales, and Xifeng Yan. 2013. Synthetic review spamming and defense. In
Proceedings of the 19thACM SIGKDD international conference on Knowledge discovery and data mining . ACM, 1088–1096.[124] Oguzhan Tas and Farzad Kiyani. 2007. A SURVEY AUTOMATIC TEXT SUMMARIZATION.
PressAcademia Procedia
5, 1 (2007), 205–213.[125] Nick Taylor, Justin Marshall, Alicia Blum-Ross, John Mills, Jon Rogers, Paul Egglestone, David M Frohlich, PeterWright, and Patrick Olivier. 2012. Empowering communities with situated voting devices. In
Proceedings of the SIGCHIConference on Human Factors in Computing Systems . 1361–1370.[126] Lars Hasselblad Torres. 2007. Citizen sourcing in the public interest.
Knowledge Management for Development Journal
3, 1 (2007), 134–145.[127] Karen Tracy and Margaret Durfy. 2007. Speaking out in public: Citizen participation in contentious school boardmeetings.
Discourse & Communication
1, 2 (2007), 223–249.[128] Gokhan Tur, Andreas Stolcke, Lynn Voss, Stanley Peters, Dilek Hakkani-Tur, John Dowding, Benoit Favre, RaquelFernández, Matthew Frampton, Mike Frandsen, et al. 2010. The CALO meeting assistant system.
IEEE Transactionson Audio, Speech, and Language Processing
18, 6 (2010), 1601–1611.[129] Eleftheria Vaportzis, Maria Giatsi Clausen, and Alan J Gow. 2017. Older adults perceptions of technology and barriersto interacting with tablet computers: a focus group study.
Frontiers in psychology
Proceedings of the 2016 CHI ConferenceExtended Abstracts on Human Factors in Computing Systems (San Jose, California, USA) (CHI EA ’16) . ACM, New York,NY, USA, 1096–1099.[131] Eric von Hippel. 2005. Democratizing innovation: Users take center stage.[132] Greg Walsh and Eric Wronsky. 2019. AI+ Co-Design: Developing a Novel Computer-supported Approach to InclusiveDesign. In
Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing .408–412.[133] Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semanticallyconditioned lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745 (2015).[134] Christopher Whitehead and Lydia Ray. 2010. Using the iclicker classroom response system to enhance studentinvolvement and learning.
Journal of Education, Informatics and Cybernetics
2, 1 (2010), 18–23.[135] Amy X Zhang and Justin Cranshaw. 2018. Making sense of group chat through collaborative tagging and summariza-tion.
Proceedings of the ACM on Human-Computer Interaction
2, CSCW (2018), 196.[136] Amy X Zhang, Bryan Culbertson, and Praveen Paritosh. 2017. Characterizing online discussion using coarse discoursesequences. In
Proceedings of the Eleventh International Conference on Web and Social Media. AAAI Press .[137] Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2019. Searching for Effective NeuralExtractive Summarization: What Works and What’s Next. arXiv preprint arXiv:1907.03491 (2019).J. ACM, Vol. 37, No. 4, Article 111. Publication date: August 2018. ommunityClick: Capturing and Reporting Community Feedback from Town Halls to Improve Inclusivity 111:31
A APPENDIX

A.1 Summary Evaluation
To summarize augmented transcripts, we wanted a robust unsupervised summarization algorithm. To that end, we extended TextRank [99] to create an attendee-feedback-weighted summarization algorithm. To test the effectiveness of this extractive summarization algorithm, we took three different transcripts, all of which are presently analyzed in the CommunityClick interface, obtained from the following meetings:
(1) A meeting discussion about creating a common room for graduate students in the department.
(2) A meeting organized by public school officials discussing challenges to building renovations.
(3) A meeting held on a college campus discussing the rise in racist activities.
We evaluated these transcripts with four human annotators, who were specifically tasked with identifying the sentences that best summarized the discussion transcripts. The annotators were graduate students working in the area of natural language processing and were familiar with the literature and practices of text summarization. We treated these human-annotated summaries as references, and against each reference summary we computed a ROUGE [89] score for the auto-generated summary. ROUGE is a widely used metric for evaluating computer-generated summaries; it measures how much of the reference (human-generated) summary our algorithm captures. Since we wanted to strike a balance between conciseness (precision) and the amount of information captured (recall) by our auto-generated summaries, we computed ROUGE F1-scores, which combine precision and recall. We report ROUGE-1 F1 and ROUGE-2 F1 scores, where ROUGE-N denotes an N-gram comparison between the auto-generated and reference summaries: ROUGE-1 measures the overlap of unigrams between the two summaries, and ROUGE-2 measures the overlap of bigrams. Upon evaluating our auto-generated summaries for the three selected transcripts against their respective reference summaries, we observed nearly consistent ROUGE scores. Our full results are summarized in Table 2. These results suggest that our algorithm produced summaries of consistent quality across different meeting transcripts.
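As a concrete reference point for how these scores are computed, the sketch below implements ROUGE-N precision, recall, and F1 as plain n-gram overlap. It is a minimal illustration written for this appendix; the helper names, whitespace tokenization, and toy sentences are our own simplifications, not the exact evaluation code used in the study.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=1):
    """Compute ROUGE-N precision, recall, and F1 between two summaries.

    `candidate` is the auto-generated summary, `reference` the human one;
    both are plain strings that are whitespace-tokenized here for simplicity.
    """
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())          # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

if __name__ == "__main__":
    generated = "the attendees supported renovating the school building"
    annotated = "attendees largely supported the school building renovation plan"
    for n in (1, 2):
        p, r, f1 = rouge_n_f1(generated, annotated, n)
        print(f"ROUGE-{n}: P={p:.2f} R={r:.2f} F1={f1:.2f}")
```

In this setup, each auto-generated summary would be scored against each of the four annotators' reference summaries, yielding the per-annotator F1 values reported in Table 2.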
Table 2. ROUGE-1 and ROUGE-2 F1 scores of generated summaries (T1–T3) against reference summaries produced by human annotators (A1–A4).

Transcript                           ROUGE-1 F1                     ROUGE-2 F1
                                     A1     A2     A3     A4        A1     A2     A3     A4
T1 (Common room discussion)          51.29  45.83  48.12  42.41     34.56  33.23  24.64  30.98
T2 (School building renovations)     48.23  12.34  44.43  49.19     28.21  31.39  29.79  23.98
T3 (Activities on college campus)    68.21  70.61  52.28  55.09     33.94  35.90  27.29  21.93

Our summarization method allowed us to generate short unsupervised summaries of meeting transcripts by modeling those transcripts as a graph with sentences being the nodes. A similarity function is used to build edges between those nodes. BM25 [116] is a ranking utility function widely used to measure similarity in state-of-the-art methods for information extraction tasks. To measure the effectiveness of using BM25 as a similarity measure in our case, and to further assess the robustness of our summarization system on a widely used standardized dataset, we evaluated the effect of changing the similarity metric used in TextRank through ablation studies. We carried out these experiments on the standard AMI Meeting Corpus (http://groups.inf.ed.ac.uk/ami/corpus/), which is a widely used meeting-summarization benchmark. We used the traditional test set of 20 meetings, each associated with a human-annotated summary of approximately 290 tokens. We compared the vanilla similarity metric [99] against the BM25 similarity metric and evaluated the ROUGE scores for the AMI test-set reference summaries. Results for these experiments are summarized in Table 3. We observe that our approach resulted in a 3-point gain in recall (amount of information captured) and a 2-point gain in precision over the vanilla similarity metric used in the TextRank algorithm on the AMI meeting corpus.

Table 3. Macro-averaged results for different similarity functions within TextRank.
Similarity Function (f)              AMI ROUGE-1
                                     P        R        F-1
TextRank (with BM25)
TextRank (with cosine-similarity)    41.21    32.66    36.54
TextRank (vanilla)                   40.11    33.48    36.27

As we evaluated the performance on a domain-specific dataset (the AMI meeting corpus), we argue that these gains are consistent across our data as well. Furthermore, we found that our results were in line with recent state-of-the-art methods [121, 137] on unsupervised extractive summarization.
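For readers who want to see how a BM25-weighted TextRank variant can be assembled, the sketch below builds a sentence graph, weights the edges with a hand-rolled BM25 score, and ranks sentences with PageRank via networkx. The BM25 parameters, the whitespace tokenization, and the omission of CommunityClick's attendee-feedback weighting are all assumptions made for this illustration; it is a minimal sketch of the general technique, not the system's actual implementation.

```python
import math
from collections import Counter
import networkx as nx

K1, B = 1.5, 0.75  # common BM25 defaults; assumed, not taken from the paper

def bm25_similarity(query_toks, doc_toks, df, n_docs, avg_len):
    """BM25 score of `doc_toks` for the terms appearing in `query_toks`."""
    tf = Counter(doc_toks)
    score = 0.0
    for term in set(query_toks):
        if term not in tf:
            continue
        idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
        denom = tf[term] + K1 * (1 - B + B * len(doc_toks) / avg_len)
        score += idf * tf[term] * (K1 + 1) / denom
    return score

def textrank_bm25(sentences, top_k=3):
    """Rank sentences with PageRank over a BM25-weighted similarity graph."""
    toks = [s.lower().split() for s in sentences]
    n_docs = len(toks)
    avg_len = sum(len(t) for t in toks) / n_docs
    df = Counter(term for t in toks for term in set(t))  # document frequency
    graph = nx.Graph()
    graph.add_nodes_from(range(n_docs))
    for i in range(n_docs):
        for j in range(i + 1, n_docs):
            w = bm25_similarity(toks[i], toks[j], df, n_docs, avg_len)
            if w > 0:
                graph.add_edge(i, j, weight=w)
    scores = nx.pagerank(graph, weight="weight")
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(best)]  # keep original sentence order

if __name__ == "__main__":
    transcript = [
        "The school building needs urgent roof repairs before winter.",
        "Several attendees asked about the renovation budget.",
        "One resident suggested a phased renovation plan.",
        "The meeting closed with a vote on the phased plan.",
    ]
    for sent in textrank_bm25(transcript, top_k=2):
        print(sent)
```

Swapping bm25_similarity for a cosine or plain co-occurrence measure reproduces the kind of similarity-function ablation summarized in Table 3.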
A.2 Automatic Speech Recognition Evaluation
To assess how accurately our automatic speech recognition (ASR) system captures the meeting audio to generate meeting transcripts, we carried out tests to evaluate the Bilingual Evaluation Understudy (BLEU) score [109] on the same transcripts described in the previous section. BLEU is a metric for evaluating a generated sentence against a reference sentence and is often used in machine translation tasks. We adopted it to evaluate our transcripts by counting n-grams in the generated text that match n-grams in the reference text. A score of 1.0 indicates a perfect match, and a score of 0.0 indicates a complete mismatch. Our reference transcripts were generated by a single human annotator who manually transcribed the meeting audio. We consistently observed high BLEU scores, indicating that tokens appearing in the reference transcripts are also present in the generated transcripts, thereby indicating that our ASR system was reliable. Our full results are summarized in Table 4.
Table 4. Evaluation of generated transcripts against reference transcripts that were manually generated by humans. BLEU-n scores here effectively indicate n-gram overlaps between sentences from the reference transcript and the generated transcript.

T1 (16 mins)   T2 (55 mins)   T3 (75 mins)
BLEU-1   BLEU-2   BLEU-3   BLEU-4
0.7812   0.7910   0.7604
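As a rough illustration of how transcript-level BLEU scores like those in Table 4 can be computed, the sketch below compares an ASR hypothesis against a human reference transcript at BLEU-1 through BLEU-4. NLTK's sentence_bleu, the smoothing choice, and the toy sentences are assumptions made for this example; the paper does not specify the exact tooling used.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def transcript_bleu(reference_text, generated_text):
    """Return cumulative BLEU-1 .. BLEU-4 for a generated transcript vs. a human reference."""
    reference = [reference_text.lower().split()]   # sentence_bleu expects a list of references
    hypothesis = generated_text.lower().split()
    smooth = SmoothingFunction().method1           # avoids zero scores on short segments
    scores = []
    for n in range(1, 5):
        # Cumulative BLEU-n uses uniform weights over 1..n-grams.
        weights = tuple(1.0 / n for _ in range(n))
        scores.append(sentence_bleu(reference, hypothesis,
                                    weights=weights,
                                    smoothing_function=smooth))
    return scores

if __name__ == "__main__":
    human = "we should prioritize the roof repairs before the winter break"
    asr = "we should prioritize the roof repair before winter break"
    for n, score in enumerate(transcript_bleu(human, asr), start=1):
        print(f"BLEU-{n}: {score:.4f}")
```

In practice, each full meeting transcript produced by the ASR system would be tokenized and scored against its manually transcribed counterpart in this way.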