Publication


Featured research published by John Wilkerson.


The Journal of Politics | 2009

Representation and American Governing Institutions

Bryan D. Jones; Heather A. Larsen-Price; John Wilkerson

We isolate two limitations of the existing literature on representation and then move toward some important remedies. The first limitation is that typical representation studies assess the extent to which policymakers’ issue positions correspond to those of the public, but do not investigate whether the issue priorities of policymakers correspond to those of the public. The second limitation is that existing studies do not consider the full policymaking process, from agenda setting to enactment. Using data provided by the Policy Agendas and Congressional Bills Projects, we investigate how well the public’s policy priorities have been represented in national policymaking over a 47-year time period. We first assess public concern about 18 major issues using Most Important Problem data (1956–2002) and then correlate these concerns with changing issue attention across 10 policymaking channels that are ordered by differences in institutional friction. We find much closer correspondence where friction is low.


Journal of Information Technology & Politics | 2008

Computer-Assisted Topic Classification for Mixed-Methods Social Science Research

Dustin Hillard; Stephen Purpura; John Wilkerson

Social scientists interested in mixed-methods research have traditionally turned to human annotators to classify the documents or events used in their analyses. The rapid growth of digitized government documents in recent years presents new opportunities for research but also new challenges. With more and more data coming online, relying on human annotators becomes prohibitively expensive for many tasks. For researchers interested in saving time and money while maintaining confidence in their results, we show how a particular supervised learning system can provide estimates of the class of each document (or event). This system maintains high classification accuracy and provides accurate estimates of document proportions, while achieving reliability levels associated with human efforts. We estimate that it lowers the costs of classifying large numbers of complex documents by 80% or more.
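The workflow the abstract describes can be illustrated with a minimal sketch: a toy multinomial Naive Bayes classifier (a stand-in for the authors' actual system, which is not reproduced here) is trained on human-labeled documents, labels the remaining documents, and the predicted labels are aggregated into the class proportions the article emphasizes. The topics, titles, and data sizes below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Toy human-labeled training data; the article's corpus is far larger.
train = [
    ("health", "medicare coverage for rural hospitals"),
    ("health", "prescription drug benefits for seniors"),
    ("defense", "military base construction funding"),
    ("defense", "armed forces pay and readiness"),
]

def tokenize(text):
    return text.lower().split()

# Fit multinomial Naive Bayes: per-class document counts and per-class
# word counts, with add-one (Laplace) smoothing applied at predict time.
class_docs = Counter(label for label, _ in train)
word_counts = defaultdict(Counter)
vocab = set()
for label, text in train:
    for w in tokenize(text):
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    scores = {}
    for label in class_docs:
        total = sum(word_counts[label].values())
        score = math.log(class_docs[label] / len(train))  # class prior
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Label unannotated documents, then aggregate the predicted labels into
# estimated class proportions.
unlabeled = [
    "hospitals and drug coverage",
    "funding for military readiness",
    "medicare benefits for rural seniors",
]
predicted = [classify(t) for t in unlabeled]
proportions = {c: predicted.count(c) / len(predicted) for c in class_docs}
```

At realistic scale the per-document labels feed the same aggregation step, which is why the article evaluates proportion estimates separately from per-document accuracy.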


Journal of Curriculum Studies | 2011

Rethinking advanced high school coursework: tackling the depth/breadth tension in the AP US Government and Politics course

Walter C. Parker; Susan Mosborg; John D. Bransford; Nancy Vye; John Wilkerson; Robert D. Abbott

This paper reports a design experiment that attempted to strike a balance between coverage and learning in an exam-oriented, college-preparatory, high school course—Advanced Placement (AP) US Government and Politics. Theoretically, the study provides a conceptual framework for penetrating the depth/breadth tension in such courses, which are known for coverage and perhaps ‘rigour’, but lag behind contemporary research on how people learn and what learning is. Methodologically, the paper details a mixed-methods study of an alternative approach to AP coursework, conducted with 314 students across three high schools. First-year findings indicate that a course of semi-repetitive, content-rich project cycles can lead to the same or higher scores on the AP exam along with deeper conceptual learning, but that attention is needed to a collateral problem: orienting students to a new kind of coursework.


Comparative Political Studies | 2011

Comparative Studies of Policy Dynamics

Frank R. Baumgartner; Bryan D. Jones; John Wilkerson

Major new understandings of policy change are emerging from a program to measure attention to policies across nations using the same instrument. Participants in this special issue have created new indicators of government activities in 11 countries over several decades. Each database is comprehensive in that it includes information about every activity of its type (e.g., laws, bills, parliamentary questions, prime ministerial speeches) for the time period covered, typically several decades. These databases are linked by a common policy topic classification system, which allows new types of analyses of public policy dynamics over time. The authors introduce the theoretical and practical questions addressed in the volume, explain the nature of the work completed, and suggest some of the ways that this new infrastructure may allow new types of comparative analyses of public policy, institutions, and outcomes. In particular, the authors challenge political scientists to incorporate policy variability into their analyses and to move far beyond the search for partisan and electoral explanations of policy change.


American Political Science Review | 1999

“Killer” Amendments in Congress

John Wilkerson

For more than three decades, social choice theorists and legislative scholars have studied how legislative outcomes in Congress can be manipulated through strategic amendments and voting. I address the central limitation of this research, a virtual absence of systematic empirical work, by examining 76 “killer” amendments considered during the 103d and 104th congresses. I trace the effects of these amendments on their related bills using archival sources, test for strategic voting using NOMINATE as the baseline measure of legislator preferences across a range of issues, and explore with OLS regression why some killer amendments are more strategically important than others. The findings indicate that successful killer amendments and identifiable strategic voting are extremely rare. In none of the cases examined could the defeat of a bill be attributed to adoption of an alleged killer amendment.


Journal of Information Technology & Politics | 2008

Text Annotation for Political Science Research

Claire Cardie; John Wilkerson

Guest Editors’ Introduction

Digitization is dramatically altering research demands and opportunities in political science, and in the social sciences more generally. To cite just a few examples: the advent of e-government has challenged governments to keep pace with rapidly expanding opportunities for public commenting via e-mail or Web portals during the development of government policy (Balla & Daniels, 2007); the creation of online media has dramatically increased the amount of accessible digital political content and altered the pace and dynamics of political campaigns (Hopkins & King, 2007); and governments around the world now release huge volumes of digitized data on a daily basis (e.g., the U.S. Federal Register, http://www.gpoaccess.gov/fr/Index.html), while national projects are digitally scanning vast numbers of historical documents. For example, British parliamentary debates from the 17th century to the present are now accessible online, and ongoing research will extend their availability back to 1066.

These developments in data accessibility are creating unprecedented opportunities both to reinvestigate longstanding questions in political science and to embark on the study of new questions. However, a central challenge of working with data of any sort is that they must be organized and classified so that the researcher can use them for the task at hand. In this volume, the data of interest are text. A government agency that receives tens of thousands of comments on a proposed regulation, for example, needs to be able to cull, categorize, and summarize the substantive information contained in those comments in a useful way. A scholar studying campaign coverage on the Internet needs to analyze and organize that coverage to test specific questions about its character.

Manual approaches to extracting information from textual data can be challenging for large tasks where resources are limited (as they usually are). Computer-assisted approaches seem to be an attractive alternative: they can enable researchers to complete certain tasks with much greater speed. Nonetheless, it is also important to recognize that faster methods are not necessarily better methods. A computer program might be able to sort public comments by zip code more quickly than, and as accurately as, humans; but humans might be substantially better, albeit slower, at classifying public comments by topic. Ultimately, each manual, automated, or semi-automated method for analyzing textual data has its own set of benefits and costs that vary depending on the task at hand.

This special volume of JITP includes eight articles investigating a diverse set of political science tasks, from e-government to political speeches to campaign coverage. The articles nicely illustrate a range of methodological challenges where extracting information from text is concerned. In the process, they also demonstrate the strengths and limitations of alternative text analysis methodologies. Text has always been an important source of data in political science.


Journal of Information Technology & Politics | 2012

Tradeoffs in Accuracy and Efficiency in Supervised Learning Methods

Loren Collingwood; John Wilkerson

Words are an increasingly important source of data for social science research. Automated classification methodologies hold the promise of substantially lowering the costs of analyzing large amounts of text. In this article, we consider a number of questions of interest to prospective users of supervised learning methods, which are used to automatically classify events based on a pre-existing classification system. Although information scientists devote considerable attention to assessing the performance of different supervised learning algorithms and feature representations, the questions asked are often less directly relevant to the more practical concerns of social scientists. The first question prospective social science users are likely to ask is, How well do such methods work? The second is, How much human labeling effort is required? The third is, How do we assess whether virgin cases have been automatically classified with sufficient accuracy? We address these questions in the context of a particular dataset—the Congressional Bills Project—which includes more than 400,000 bill titles that humans have classified into 20 policy topics. This corpus offers an unusual opportunity to assess the performance of different algorithms, the impact of sample size, and the benefits of ensemble learning as a means for estimating classification accuracy.
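The third question, assessing the accuracy of automatically classified "virgin" cases, is often approached through ensemble agreement: cases on which independent classifiers agree tend to be classified more reliably, so agreement can triage which labels to accept and which to send back to human coders. A minimal sketch follows, with invented keyword rules standing in for real trained learners; the labels and titles are illustrative only.

```python
from collections import Counter

# Three deliberately crude keyword classifiers stand in for trained
# ensemble members; the rules and topic labels are invented.
def by_health_terms(title):
    return "health" if "medicare" in title or "hospital" in title else "defense"

def by_defense_terms(title):
    return "defense" if "military" in title or "missile" in title else "health"

def by_first_word(title):
    return "health" if title.split()[0] in {"medicare", "prescription"} else "defense"

CLASSIFIERS = [by_health_terms, by_defense_terms, by_first_word]

def ensemble_label(title):
    """Majority-vote label plus the agreement rate, a confidence proxy."""
    votes = Counter(f(title) for f in CLASSIFIERS)
    label, count = votes.most_common(1)[0]
    return label, count / len(CLASSIFIERS)

# Unanimous titles can be accepted automatically; low-agreement titles
# are routed back to human coders.
for title in ["medicare hospital funding", "military missile program", "water projects"]:
    label, agreement = ensemble_label(title)
    decision = "auto-accept" if agreement == 1.0 else "human review"
```

The design choice here is that agreement is measured per case, not per classifier, which is what lets a project spend scarce human labeling effort only where the ensemble is uncertain.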


ACM/IEEE Joint Conference on Digital Libraries | 2014

Detecting and modeling local text reuse

David A. Smith; Ryan Cordell; Elizabeth Maddock Dillon; Nick Stramp; John Wilkerson

Texts propagate through many social networks and provide evidence for their structure. We describe and evaluate efficient algorithms for detecting clusters of reused passages embedded within longer documents in large collections. We apply these techniques to two case studies: analyzing the culture of free reprinting in the nineteenth-century United States and the development of bills into legislation in the U.S. Congress. Using these divergent case studies, we evaluate both the efficiency of the approximate local text reuse detection methods and the accuracy of the results. These techniques allow us to explore how ideas spread, which ideas spread, and which subgroups shared ideas.
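Local text reuse detection of this kind typically starts from n-gram "shingles": overlapping word sequences whose overlap between two documents flags a reused passage. A minimal sketch under stated assumptions (the passages and the shingle length are invented; the paper's methods additionally hash shingles so candidates can be found efficiently at collection scale):

```python
def shingles(text, n=4):
    """Overlapping word n-grams; large-scale systems hash these so
    candidate pairs are found without all-pairs comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Two invented bill-like passages; the second reuses a clause from the first.
bill_a = ("the secretary shall establish a program to promote "
          "renewable energy research and development in rural areas")
bill_b = ("a program to promote renewable energy research shall be "
          "reported to congress annually")

shared = shingles(bill_a) & shingles(bill_b)   # evidence of local reuse
similarity = jaccard(shingles(bill_a), shingles(bill_b))
```

Note that whole-document similarity stays low here even though a clause was copied verbatim, which is why the paper targets *local* reuse (clusters of shared shingles) rather than global document similarity.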


Computational Social Science | 2014

Overview of the 2014 NLP Unshared Task in PoliInformatics

Noah A. Smith; Claire Cardie; Anne L. Washington; John Wilkerson

We describe a research activity carried out during January–April 2014, seeking to increase engagement between the natural language processing research community and social science scholars. In this activity, participants were offered a corpus of text relevant to the 2007–8 financial crisis and an open-ended prompt. Their responses took the form of a short paper and an optional demonstration, to which a panel of judges will respond with the goal of identifying efforts with the greatest potential for future interdisciplinary collaboration.


Archive | 2008

Comparing Governmental Agendas: Evolution of the Prioritization of Issues in the USA and Spain

Laura Chaqués-Bonafont; Anna M. Palau; Luz Muñoz; John Wilkerson

This paper is the first step in a long-term project investigating policy stability and change in Spain from an agenda setting perspective and comparing the Spanish policy agenda to that of other advanced democracies. Here we begin to compare the allocation of issue attention in Spain and the USA by comparing the substance of annual President and Prime Minister speeches from 1982 to 2005. Existing research argues that the public agenda has become more crowded, competitive, and volatile in recent years. We find that in both countries there has been a transformation of the political agenda towards an increasing diversity of issues. However, most of the volatility in executive attention seems to be explained by salient events rather than by issue crowding. We conclude by discussing some limitations of executive speeches as a measure of governmental issue attention and directions for future research.

Collaboration


Dive into John Wilkerson's collaborations.

Top Co-Authors

E. Scott Adler (University of Colorado Boulder)
Bryan D. Jones (University of Texas at Austin)
Frank R. Baumgartner (University of North Carolina at Chapel Hill)
David Lowery (Pennsylvania State University)
Gerard Breeman (Wageningen University and Research Centre)
Peter John (University College London)