Publication


Featured research published by Patrick Morrison.


Mining Software Repositories | 2013

Is programming knowledge related to age? An exploration of Stack Overflow

Patrick Morrison; Emerson R. Murphy-Hill

Becoming an expert at programming is thought to take an estimated 10,000 hours of deliberate practice. But what happens after that? Do programming experts continue to develop, do they plateau, or is there a decline at some point? A diversity of opinion exists on this matter, but many seem to think that aging brings a decline in the adoption and absorption of new programming knowledge. We develop several research questions on this theme, and draw on data from Stack Overflow (SO) to address them. The goal of this research is to support career planning and staff development for programmers by identifying age-related trends in SO data. We observe that programmer reputation scores increase relative to age well into the 50s, that programmers in their 30s tend to focus on fewer areas relative to those younger or older in age, and that there is not a strong correlation between age and scores in specific knowledge areas.
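
The core measurement here is the relationship between self-reported age and reputation. A minimal sketch of that style of analysis, assuming a hypothetical CSV export of user ages and reputation scores (file and column names are ours, not the paper's):

```python
# A minimal sketch of an age/reputation analysis over Stack Overflow data.
# "so_users.csv" and its columns (age, reputation) are hypothetical stand-ins
# for the paper's SO data extract.
import pandas as pd
from scipy.stats import spearmanr

users = pd.read_csv("so_users.csv")
users = users[(users["age"] >= 15) & (users["age"] <= 80)]  # drop implausible ages

# Median reputation per age: does it keep rising into the 50s?
by_age = users.groupby("age")["reputation"].median()
print(by_age.loc[30:60])

# Rank correlation between age and reputation across all users
rho, p = spearmanr(users["age"], users["reputation"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```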


International Conference on Management of Data | 2013

Stat!: an interactive analytics environment for big data

Mike Barnett; Badrish Chandramouli; Robert DeLine; Steven M. Drucker; Danyel Fisher; Jonathan Goldstein; Patrick Morrison; John Platt

Exploratory analysis on big data requires us to rethink data management across the entire stack -- from the underlying data processing techniques to the user experience. We demonstrate Stat! -- a visualization and analytics environment that allows users to rapidly experiment with exploratory queries over big data. Data scientists can use Stat! to refine quickly toward the correct query, while getting immediate feedback after processing only a fraction of the data. Stat! can work with multiple processing engines in the backend; in this demo, we use Stat! with the Microsoft StreamInsight streaming engine. StreamInsight is used to generate incremental early results to queries and refine these results as more data is processed. Stat! allows data scientists to explore data, dynamically compose multiple queries to generate streams of partial results, and display partial results in both textual and visual form.
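
The partial-results idea can be illustrated independently of the demo's engine. A minimal sketch of an incremental aggregate that emits a refined answer as more data is scanned (a conceptual illustration only, not the Stat! or StreamInsight API):

```python
# Illustrative sketch of "partial results": an incremental aggregate that
# emits a refined answer as more of the data is scanned. This demonstrates
# the concept only; it is not the Stat! or StreamInsight API.
from typing import Iterable, Iterator, Tuple

def running_mean(stream: Iterable[float], every: int = 1000) -> Iterator[Tuple[int, float]]:
    """Yield (rows_seen, current_mean) after each batch of `every` rows."""
    total, n = 0.0, 0
    for x in stream:
        total += x
        n += 1
        if n % every == 0:
            yield n, total / n      # early, partial answer
    if n % every:
        yield n, total / n          # final answer over all the data

# The analyst watches the estimate converge while the scan continues:
for seen, estimate in running_mean(range(10_000), every=2_500):
    print(f"after {seen} rows: mean ~ {estimate:.1f}")
```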


International Conference on Software Engineering | 2015

Approximating attack surfaces with stack traces

Christopher Theisen; Kim Herzig; Patrick Morrison; Brendan Murphy; Laurie Williams

Security testing and reviewing efforts are a necessity for software projects, but are time-consuming and expensive to apply. Identifying vulnerable code supports decision-making during all phases of software development. An approach for identifying vulnerable code is to identify its attack surface, the sum of all paths for untrusted data into and out of a system. Identifying the code that lies on the attack surface requires expertise and significant manual effort. This paper proposes an automated technique to empirically approximate attack surfaces through the analysis of stack traces. We hypothesize that stack traces from user-initiated crashes have several desirable attributes for measuring attack surfaces. The goal of this research is to aid software engineers in prioritizing security efforts by approximating the attack surface of a system via stack trace analysis. In a trial on Windows 8, the attack surface approximation selected 48.4% of the binaries and contained 94.6% of known vulnerabilities. Compared with vulnerability prediction models (VPMs) run on the entire codebase, VPMs run on the attack surface approximation improved recall from .07 to .1 for binaries and from .02 to .05 for source files. Precision remained at .5 for binaries, while improving from .5 to .69 for source files.
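
The technique can be sketched simply: a binary is considered on the approximated attack surface if it appears in any user-initiated crash stack trace. A hedged illustration, using a simplified, hypothetical rendering of Windows-style trace frames:

```python
# Sketch of attack-surface approximation from crash stack traces: a binary
# is on the approximated surface if it appears in any user-initiated crash
# trace. The frame format ("module!function+offset") is a simplified,
# hypothetical rendering of Windows-style traces.
def binaries_in_trace(trace: str) -> set:
    """Extract binary names from frames like 'module!function+0x1a'."""
    return {line.split("!", 1)[0].strip()
            for line in trace.splitlines() if "!" in line}

def approximate_attack_surface(traces: list) -> set:
    surface = set()
    for trace in traces:
        surface |= binaries_in_trace(trace)
    return surface

traces = [
    "ntdll!RtlUserThreadStart+0x21\nbrowser!ParseInput+0x4f",
    "browser!ParseInput+0x4f\nmedia!DecodeFrame+0x99",
]
print(approximate_attack_surface(traces))  # {'ntdll', 'browser', 'media'}
```

Security review effort is then prioritized on this subset; per the abstract, in the Windows 8 trial the subset held 48.4% of the binaries but 94.6% of known vulnerabilities.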


Software Engineering in Health Care | 2013

Proposing regulatory-driven automated test suites for electronic health record systems

Patrick Morrison; Casper Holmgreen; Aaron K. Massey; Laurie Williams

In regulated domains such as finance and health care, failure to comply with regulation can lead to financial, civil, and criminal penalties. While systems vary from organization to organization, regulations apply across organizations. We propose the use of Behavior-Driven Development (BDD) scenarios as the basis of an automated compliance test suite for standards such as regulation and interoperability. Such test suites could become a shared asset for use by all systems subject to these regulations and standards. Each system, then, need only create its own system-specific test driver code to automate its compliance checks. The goal of this research is to enable organizations to compare their systems to regulation in a repeatable and traceable way through the use of BDD. To evaluate our proposal, we developed an abbreviated HIPAA test suite and applied it to three open-source electronic health record systems. The scenarios covered all security behavior defined by the selected regulation. The system-specific test driver code covered all security behavior defined in the scenarios, and identified where the tested system lacked such behavior.
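
A sketch of the proposed layering, using the Python behave BDD framework: a shared, regulation-level Gherkin scenario plus step definitions that each system backs with its own driver code. The scenario text and the context.driver API are hypothetical illustrations, not items from the paper's HIPAA suite.

```python
# Hypothetical shared BDD scenario (in practice it lives in a .feature file):
#
#   Scenario: Revoked user cannot access health records
#     Given a user whose access has been revoked
#     When the user attempts to log in
#     Then the system denies access and logs the attempt
#
from behave import given, when, then  # pip install behave

@given("a user whose access has been revoked")
def step_revoked_user(context):
    # context.driver is the system-specific test driver each EHR supplies
    context.user = context.driver.create_user()
    context.driver.revoke_access(context.user)

@when("the user attempts to log in")
def step_attempt_login(context):
    context.result = context.driver.login(context.user)

@then("the system denies access and logs the attempt")
def step_assert_denied(context):
    assert not context.result.succeeded
    assert context.driver.audit_log_contains(context.user, "login-denied")
```

The scenario file is shared across organizations; only the driver object behind context.driver changes per system.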


International Conference on Software Engineering | 2018

Poster: Identifying Security Issues in Software Development: Are Keywords Enough?

Patrick Morrison; Tosin Daniel Oyetoyan; Laurie Williams

Identifying security issues before attackers do has become a critical concern for software development teams and software users. One approach to identifying security issues in software development artifacts is to use lists of security-related keywords to build classifiers for detecting security issues. However, generic keyword lists may miss project-specific vocabulary. The goal of this research is to support researchers and practitioners in identifying security issues in software development project artifacts by defining and evaluating a systematic scheme for identifying project-specific security vocabularies that can be used for keyword-based classification. We sampled and manually classified 5400 messages from the Apache Derby, Apache Camel, and Dolibarr projects to form an oracle. In addition, we collected each project's publicly disclosed vulnerability data from the CVE database and mapped it to the project's dataset to create a CVE-labelled dataset. We extracted project-specific vocabulary from each project and built classifiers to predict security-related issues in both the oracle and the CVE dataset. In our data, we found that the vocabularies of each project included project-specific terms in addition to generic security keywords. Classifiers based on the project-specific security vocabularies at least doubled recall (at varying costs to precision) compared with the previously published keyword lists we evaluated.
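
A minimal sketch of keyword-based classification, showing how project-specific terms can raise recall over a generic list. Both keyword sets below are illustrative, not the paper's systematically derived vocabularies.

```python
# Keyword-based classification of a development message as security-related.
# GENERIC_KEYWORDS and PROJECT_KEYWORDS are invented examples.
import re

GENERIC_KEYWORDS = {"security", "vulnerability", "exploit", "xss", "injection"}
PROJECT_KEYWORDS = {"sqlauthorization", "grant", "revoke"}  # Derby-flavored examples

def is_security_related(message: str, vocab: set) -> bool:
    tokens = set(re.findall(r"[a-z]+", message.lower()))
    return bool(tokens & vocab)

msg = "Fix GRANT/REVOKE handling when sqlAuthorization is enabled"
print(is_security_related(msg, GENERIC_KEYWORDS))                     # False: generic list misses it
print(is_security_related(msg, GENERIC_KEYWORDS | PROJECT_KEYWORDS))  # True
```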


Proceedings of the Hot Topics in Science of Security: Symposium and Bootcamp | 2017

Surveying Security Practice Adherence in Software Development

Patrick Morrison; Benjamin H. Smith; Laurie Williams

Software development teams are increasingly incorporating security practices into their software development processes. However, little empirical evidence exists on the costs and benefits associated with the application of security practices. Balancing the trade-off between the costs in time, effort, and complexity of applying security practices and the benefit of an appropriate level of security in delivered software requires measuring security practice benefits and costs. The goal of this research is to support researcher investigations of software development security practice adherence by building and validating a set of security practices and adherence measures through literature review and survey data analysis. We extracted 16 software development security practices from a review of the literature, and established a set of adherence measures based on technology acceptance theory. We built a survey around the 13 most common practices and our adherence measures. We surveyed 11 security-focused open source projects to collect empirical data as a test of our theorizing about practice adherence. In our collected survey data, each of the 13 security practices we identified was used daily by at least one survey participant. Tracking vulnerabilities and applying secure coding standards are the practices most often applied daily. In our data, Ease of use, Effectiveness, and Training, measured via Likert items, did not always show the expected theoretical relationship with practice use. In our data, Training is positively correlated with practice use, while Effectiveness and Ease of use vary in their correlations with practice use on a practice-by-practice basis.
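
The adherence analysis pairs Likert measures with reported practice use. A small, hypothetical sketch of the per-practice rank correlation described above; the data frame and its values are invented for illustration:

```python
# Rank-correlate a Likert measure (e.g. Ease of use) with reported practice
# use, per practice. All data here is invented for illustration.
import pandas as pd
from scipy.stats import spearmanr

survey = pd.DataFrame({
    "practice":    ["Tracking vulnerabilities"] * 4 + ["Secure coding standards"] * 4,
    "ease_of_use": [5, 4, 2, 3, 1, 2, 4, 5],   # 1-5 Likert responses
    "use_freq":    [7, 5, 1, 2, 3, 1, 2, 6],   # e.g. days of use per week
})

for practice, grp in survey.groupby("practice"):
    rho, p = spearmanr(grp["ease_of_use"], grp["use_freq"])
    print(f"{practice}: rho = {rho:.2f} (p = {p:.2f})")
```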


International Conference on Software Engineering | 2015

A security practices evaluation framework

Patrick Morrison

Software development teams need guidance on choosing security practices so they can develop code securely. The academic and practitioner literature on software development security practices is large and expanding. However, published empirical evidence for security practice use in software development is limited and fragmented, making choosing appropriate practices difficult. Measurement frameworks offer a tool for collecting and comparing software engineering data. The goal of this work is to aid software practitioners in evaluating security practice use in the development process by defining and validating a measurement framework for software development security practice use and outcomes. We define the Security Practices Evaluation Framework (SP-EF), a measurement framework for software development security practices. SP-EF supports evidence-based practice selection. To enable comparison of practices across publications and projects, we define an ontology of software development security practices. We evaluate the framework and ontology on historical data and industrial projects.


International Conference on Software Engineering | 2018

Are vulnerabilities discovered and resolved like other defects?

Patrick Morrison; Rahul Pandita; Xusheng Xiao; Ram Chillarege; Laurie Williams

Context: Software defect data has long been used to drive software development process improvement. If security defects (i.e., vulnerabilities) are discovered and resolved by different software development practices than non-security defects, knowledge of that distinction could be applied to drive process improvement. Objective: The goal of this research is to support technical leaders in making security-specific software development process improvements by analyzing the differences between the discovery and resolution of defects versus that of vulnerabilities. Method: We extend Orthogonal Defect Classification (ODC) [1], a scheme for classifying software defects to support software development process improvement, to study process-related differences between vulnerabilities and defects, creating ODC + Vulnerabilities (ODC+V). We applied ODC+V to classify 583 vulnerabilities and 583 defects across 133 releases of three open-source projects (Firefox, phpMyAdmin, and Chrome). Results: Compared with defects, vulnerabilities are found later in the development cycle and are more likely to be resolved through changes to conditional logic. In Firefox, vulnerabilities are resolved 33% more quickly than defects. From a process improvement perspective, these results indicate opportunities may exist for more efficient vulnerability detection and resolution. Figures 1 and 2 present the percentage of defects and vulnerabilities found in each Activity for Firefox and phpMyAdmin, ordered from left to right as a timeline, first by pre-release, then by post-release. In these projects, pre-release effort in vulnerability and defect detection correlates with pre-release vulnerability and defect resolution. Conclusion: We found ODC+V's property of associating vulnerability and defect discovery and resolution events with their software development process contexts helpful for gaining insight into three open source software projects. The addition of the SecurityImpact attribute, in particular, brought visibility into when threat types are discovered during the development process. We would expect use of ODC+V (and of base ODC) periodically over time to be helpful for steering software development projects toward their quality assurance goals. We give our full report in Morrison et al. [2].
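
One of the comparisons described above, time-to-resolution for vulnerabilities versus other defects, reduces to a simple grouped computation. A sketch with hypothetical field names and dates, not the study's actual records:

```python
# Median time-to-resolution for vulnerabilities vs. other defects.
# Records below are invented for illustration.
import pandas as pd

issues = pd.DataFrame({
    "kind":   ["vulnerability", "vulnerability", "defect", "defect"],
    "opened": pd.to_datetime(["2017-01-01", "2017-02-01", "2017-01-05", "2017-03-01"]),
    "closed": pd.to_datetime(["2017-01-11", "2017-02-21", "2017-02-05", "2017-05-01"]),
})
issues["days_open"] = (issues["closed"] - issues["opened"]).dt.days
print(issues.groupby("kind")["days_open"].median())
```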


International Conference on Software Engineering | 2018

What questions do programmers ask about configuration as code?

Akond Rahman; Asif Partho; Patrick Morrison; Laurie Williams

Configuration as code (CaC) tools, such as Ansible and Puppet, help software teams implement continuous deployment and deploy software changes rapidly. CaC tools are growing in popularity, yet the challenges programmers encounter with CaC tools have not been characterized. A systematic investigation of the questions programmers ask can help us identify potential technical challenges with CaC, and can aid successful use of CaC tools. The goal of this paper is to help current and potential configuration as code (CaC) adoptees in identifying the challenges related to CaC through an analysis of questions asked by programmers on a major question and answer website. We extract 2,758 Puppet-related questions asked by programmers from January 2010 to December 2016, posted on Stack Overflow. We apply qualitative analysis to identify the questions programmers ask about Puppet. We also investigate the trends in questions with unsatisfactory answers, and changes in question categories over time. From our empirical study, we synthesize 16 major categories of questions. The three most common question categories are: (i) syntax errors; (ii) provisioning instances; and (iii) assessing Puppet's feasibility to accomplish certain tasks. The three categories of questions that yield the most unsatisfactory answers are (i) installation, (ii) security, and (iii) data separation.
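
A sketch of the question-selection step over a Stack Overflow posts dump: filter to Puppet-tagged questions in the study window and flag those without an accepted answer. The file, column names, and the "no accepted answer" proxy for an unsatisfactory question are assumptions, not necessarily the paper's operationalization.

```python
# Select Puppet-tagged Stack Overflow questions from 2010-2016 and flag
# those without an accepted answer. "so_posts.csv" and its columns are
# hypothetical stand-ins for a Stack Overflow data dump.
import pandas as pd

posts = pd.read_csv("so_posts.csv", parse_dates=["creation_date"])
puppet = posts[
    posts["tags"].str.contains("puppet", case=False, na=False)
    & posts["creation_date"].between("2010-01-01", "2016-12-31")
]
unsatisfactory = puppet[puppet["accepted_answer_id"].isna()]
print(f"{len(puppet)} Puppet questions, {len(unsatisfactory)} without an accepted answer")
```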


Information & Software Technology | 2018

Mapping the field of software life cycle security metrics

Patrick Morrison; David Moye; Rahul Pandita; Laurie Williams

Context: Practitioners establish a piece of software's security objectives during the software development process. To support control and assessment, practitioners and researchers seek to measure security risks and mitigations during software development projects. Metrics provide one means for assessing whether software security objectives have been achieved. A catalog of security metrics for the software development life cycle could assist practitioners in choosing appropriate metrics, and researchers in identifying opportunities for refinement of security measurement. Objective: The goal of this research is to support practitioner and researcher use of security measurement in the software life cycle by cataloging security metrics presented in the literature, their validation, and the subjects they measure. Method: We conducted a systematic mapping study, beginning with 4818 papers and narrowing down to 71 papers reporting on 324 unique security metrics. For each metric, we identified the subject being measured, how the metric has been validated, and how the metric is used. We categorized the metrics and give examples of metrics for each category. Results: In our data, 85% of security metrics have been proposed and evaluated solely by their authors, leaving room for replication and confirmation through field studies. Approximately 60% of the metrics have been empirically evaluated, by their authors or by others. The available metrics are weighted heavily toward the implementation and operations phases, with relatively few metrics for the requirements, design, and testing phases of software development. Some artifacts and processes remain unmeasured. Measured by phase, testing received the least attention, with 1.5% of the metrics. Conclusions: At present, the primary application of security metrics to the software development life cycle in the literature is to study the relationship between properties of source code and reported vulnerabilities. The most-cited and most-used metric, vulnerability count, has multiple definitions and operationalizations. We suggest that researchers must check vulnerability count definitions when making comparisons between papers. In addition to refining vulnerability measurement, we see research opportunities for greater attention to metrics for the requirements, design, and testing phases of development. We conjecture from our data that the field of software life cycle security metrics has yet to converge on an accepted set of metrics.

Collaboration


Explore Patrick Morrison's collaborations.

Top Co-Authors

Laurie Williams, North Carolina State University
Rahul Pandita, North Carolina State University
Aaron K. Massey, North Carolina State University
Casper Holmgreen, North Carolina State University
David Moye, North Carolina State University
Emerson R. Murphy-Hill, North Carolina State University
Xusheng Xiao, North Carolina State University