Denae Ford
North Carolina State University
Publications
Featured research published by Denae Ford.
foundations of software engineering | 2016
Denae Ford; Justin Smith; Philip J. Guo; Chris Parnin
It is no secret that females engage less in programming fields than males. However, in online communities such as Stack Overflow, this gender gap is even more extreme: only 5.8% of contributors are female. In this paper, we use a mixed-methods approach to identify contribution barriers females face in online communities. Through 22 semi-structured interviews with a spectrum of female users, ranging from non-contributors to a top-100-ranked user of all time, we identified 14 barriers preventing them from contributing to Stack Overflow. We then conducted a survey with 1470 female and male developers to confirm which barriers are gender-related and which are general problems for everyone. Females ranked five barriers significantly higher than males, including doubts about the level of expertise needed to contribute, feeling overwhelmed when competing with a large number of users, and limited awareness of site features. Still, other barriers equally impacted all Stack Overflow users or affected particular groups, such as industry programmers. Finally, we describe several implications that may encourage increased participation in the Stack Overflow community across genders and other demographics.
international conference on software engineering | 2015
Denae Ford; Chris Parnin
When learning to program, frustrating experiences contribute to negative learning outcomes and poor retention in the field. Defining a common framework that explains why these experiences occur can lead to better interventions and learning mechanisms. To begin constructing such a framework, we asked 45 software developers to rate the severity of their frustration and to recall their most recent frustrating programming experience. Overall, 67% considered their frustration to be severe. We further distilled the reported experiences into 11 categories, including issues with mapping behaviors to code and broken programming tools. Finally, we discuss future directions for defining our framework and designing future interventions.
symposium on visual languages and human-centric computing | 2017
Denae Ford; Alisse Harkins; Chris Parnin
Stack Overflow is a learning community where software developers share and solve programming problems with each other. However, women are often deterred from contributing questions or answers. Research outside programming communities suggests that the presence of peers can increase activity from underrepresented users in unfamiliar spaces. To investigate this concept of peer parity, we studied how women participate on Stack Overflow and whether the presence of more women on a thread enhanced their activity. We found that women who encountered other women were more likely to engage sooner than those who did not. We discuss how these findings can support women in programming communities through peer mentorship and increase engagement.
international conference on software engineering | 2018
Mahnaz Behroozi; Alison Lui; Ian Moore; Denae Ford; Chris Parnin
Problem-solving on a whiteboard is a popular technical interview technique used in industry. However, several critics have raised concerns that whiteboard interviews can cause excessive stress and cognitive load on candidates, ultimately reinforcing bias in hiring practices. Unfortunately, many sensors used for measuring cognitive state are not robust to the movement inherent in standing at a whiteboard. In this paper, we describe an approach that uses a head-mounted eye-tracker and computer vision algorithms to collect robust metrics of cognitive state. To demonstrate the feasibility of the approach, we study two proposed interview settings, whiteboard and paper, with 11 participants. Our preliminary results suggest that the whiteboard setting pressures candidates into shorter attention spans and higher levels of cognitive load compared to solving the same problems on paper. For instance, we observed 60 ms shorter fixation durations and 3x more regressions when solving problems on the whiteboard. Finally, we describe a vision for creating a more inclusive technical interview process through future studies of interventions that lower cognitive load and stress.
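For readers unfamiliar with the two eye-tracking metrics named above, here is a minimal, hypothetical Python sketch of how fixation durations and regressions might be computed from a sequence of gaze fixations. The Fixation record, the millisecond timestamps, and the "backward jump" definition of a regression are illustrative assumptions, not the instrumentation or analysis pipeline used in the paper.

```python
# Hypothetical sketch: deriving fixation durations and regression counts
# from a sequence of gaze fixations. Data format and the regression
# definition are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float   # time the gaze settled on a point
    end_ms: float     # time the gaze moved away
    x: float          # horizontal position on the page/board

def fixation_durations(fixations):
    """Duration of each fixation in milliseconds."""
    return [f.end_ms - f.start_ms for f in fixations]

def count_regressions(fixations):
    """Count backward jumps: a fixation landing left of its predecessor,
    a common proxy for revisiting already-read material."""
    return sum(
        1 for prev, cur in zip(fixations, fixations[1:])
        if cur.x < prev.x
    )

# Example: three fixations, the last jumping back to the left.
gaze = [Fixation(0, 220, 10.0), Fixation(250, 430, 55.0), Fixation(460, 700, 20.0)]
print(fixation_durations(gaze))   # [220, 180, 240]
print(count_regressions(gaze))    # 1
```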
international conference on software engineering | 2017
Denae Ford; Titus Barik; Leslie Rand-Pickett; Chris Parnin
Software engineer job candidates are not succeeding at technical interviews. Although candidates are able to answer technical questions, there is a mismatch between what candidates think interviewers assess and the criteria interviewers actually use in practice. This mismatch in expectations can cost candidates a job opportunity. To determine what criteria interviewers value, we conducted mock technical interviews with software engineer candidates at a university and collected evaluations from interviewers. We analyzed 70 interview evaluations from 9 software companies. Using a grounded theory approach, we compared interviewer interpretations of criteria, including performing a problem-solving walkthrough, applying previous experience to problem solving, and the ability to engage in conversation beyond writing code. From these findings, we describe what candidates can expect to be evaluated on during technical interviews, and how those criteria can vary significantly across companies.
foundations of software engineering | 2016
Brittany Johnson; Rahul Pandita; Justin Smith; Denae Ford; Sarah Elder; Emerson R. Murphy-Hill; Sarah Heckman; Caitlin Sadowski
Program analysis tools use notifications to communicate with developers, but previous research suggests that developers encounter challenges that impede this communication. This paper describes a qualitative study that identifies 10 kinds of challenges that cause notifications to miscommunicate with developers. Our resulting notification communication theory reveals that many challenges span multiple tools and multiple levels of developer experience. Our results suggest that, for example, future tools that model developer experience could improve communication and help developers build more accurate mental models.
human factors in computing systems | 2018
Denae Ford; Kristina Lustig; Jeremy Banks; Chris Parnin
Online question-and-answer (Q&A) communities like Stack Overflow have norms that are not obvious to novice users. Novices create and post programming questions without feedback, and the community enforces site norms through public downvoting and commenting, which can leave novices discouraged from further participation. We deployed a month-long, just-in-time mentorship program on Stack Overflow in which we redirected novices in the process of asking a question to an on-site Help Room. There, novices received feedback on their question drafts from experienced Stack Overflow mentors. We present examples and discussion of question improvements, including question context, code formatting, and wording that adheres to on-site cultural norms. We find that mentored questions are substantially improved over non-mentored questions, with average scores increasing by 50%. We provide design implications that challenge how socio-technical communities onboard novices across domains.
symposium on visual languages and human-centric computing | 2017
Denae Ford
Despite efforts to get underrepresented groups involved in online computing communities, few have joined. When these developers are asked why they chose not to participate, they often describe the Stack Overflow community as a place where they do not feel welcome. Using qualitative and quantitative analysis, my work studies how to make programmers feel more welcome and decide to participate in these communities. My thesis proposes using experienced peers as a way to engage lurking users so that they become active participants in programming communities.
empirical software engineering and measurement | 2017
Denae Ford; Thomas Zimmermann; Christian Bird; Nachiappan Nagappan
Mistaking versatility for universal skill, some companies categorize all software engineers the same way, not recognizing that differences exist. For example, a company may select one of many software engineers to complete a task, only to find that the engineer's skills and style do not match those the task requires. This can delay task completion and demonstrates that a one-size-fits-all concept should not apply to how software engineers work. To gain a comprehensive understanding of different software engineers and their working styles, we interviewed 21 participants and surveyed 868 software engineers at a large software company, asking them about their work in terms of knowledge-worker actions. We identify how tasks, collaboration styles, and perspectives on autonomy can significantly affect different approaches to software engineering work. To characterize these differences, we describe empirically informed personas of how engineers work. Our software engineering personas include engineers with focused debugging abilities, engineers with an active interest in learning, experienced advisors who serve as experts in their role, and more. Our study and results serve as a resource for building products, services, and tools around these software engineering personas.
Proceedings of the ACM on Human-Computer Interaction | 2017
Fannie Liu; Denae Ford; Chris Parnin; Laura Dabbish