Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Martin Katz is active.

Publication


Featured research published by Daniel Martin Katz.


Physica A: Statistical Mechanics and its Applications | 2010

A Mathematical Approach to the Study of the United States Code

Michael James Bommarito; Daniel Martin Katz

The United States Code (Code) is a document containing over 22 million words that represents a large and important source of Federal statutory law. Scholars and policy advocates often discuss the direction and magnitude of changes in various aspects of the Code. However, few have mathematically formalized the notions behind these discussions or directly measured the resulting representations. This paper addresses the current state of the literature in two ways. First, we formalize a representation of the United States Code as the union of a hierarchical network and a citation network over vertices containing the language of the Code. This representation reflects the fact that the Code is a hierarchically organized document containing language and explicit citations between provisions. Second, we use this formalization to measure aspects of the Code as codified in October 2008, November 2009, and March 2010. These measurements allow for a characterization of the actual changes in the Code over time. Our findings indicate that in the recent past, the Code has grown in its amount of structure, interdependence, and language.
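
The hierarchy-plus-citation representation is straightforward to prototype. Below is a minimal sketch using networkx; the provisions and the citation are hypothetical stand-ins, not data from the paper.

```python
# A toy instance of the paper's representation: one digraph whose vertices
# carry text, with hierarchy edges and citation edges distinguished by an
# edge attribute. The provisions below are hypothetical stand-ins.
import networkx as nx

G = nx.DiGraph()
G.add_node("T26", text="")
G.add_node("T26/ch1", text="")
G.add_node("T26/ch1/s1", text="A tax is hereby imposed on the taxable income of every individual.")
G.add_node("T26/ch1/s63", text="The term taxable income means gross income minus deductions.")

# Hierarchy: title -> chapter -> section.
G.add_edge("T26", "T26/ch1", kind="hierarchy")
G.add_edge("T26/ch1", "T26/ch1/s1", kind="hierarchy")
G.add_edge("T26/ch1", "T26/ch1/s63", kind="hierarchy")

# Explicit citation between provisions.
G.add_edge("T26/ch1/s1", "T26/ch1/s63", kind="citation")

# Size measures in the spirit of the paper: structure, interdependence, language.
n_structure = sum(1 for *_, d in G.edges(data=True) if d["kind"] == "hierarchy")
n_citations = sum(1 for *_, d in G.edges(data=True) if d["kind"] == "citation")
n_words = sum(len(d["text"].split()) for _, d in G.nodes(data=True))
print(n_structure, n_citations, n_words)
```

Tagging each edge with its kind lets the structural and citation measures be computed independently over the same vertex set, which is what the union formalization requires.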


Physica A: Statistical Mechanics and its Applications | 2010

Distance Measures for Dynamic Citation Networks

Michael James Bommarito; Daniel Martin Katz; Jonathan L. Zelner; James H. Fowler

Acyclic digraphs arise in many natural and artificial processes. Among the broader set, dynamic citation networks represent an important type of acyclic digraph. For example, the study of such networks includes the spread of ideas through academic citations, the spread of innovation through patent citations, and the development of precedent in common law systems. The specific dynamics that produce such acyclic digraphs not only differentiate them from other classes of graphs, but also provide guidance for the development of meaningful distance measures. In this article, we develop and apply our sink distance measure together with the single-linkage hierarchical clustering algorithm to both a two-dimensional directed preferential attachment model as well as empirical data drawn from the first quarter-century of decisions of the United States Supreme Court. Despite applying the simplest combination of distance measure and clustering algorithm, analysis reveals that more accurate and more interpretable clusterings are produced by this scheme.
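
As a rough illustration of sink-based clustering, the sketch below uses the Jaccard distance between the sets of sinks reachable from each vertex as a stand-in for the paper's sink distance (an assumption, not the published formula), then applies scipy's single-linkage clustering.

```python
# Toy citation DAG: edges point from the citing decision to the cited one.
# The Jaccard distance over reachable-sink sets below is a stand-in, not
# the paper's published sink distance.
import networkx as nx
from scipy.cluster.hierarchy import fcluster, linkage

G = nx.DiGraph([("E", "C"), ("D", "C"), ("C", "A"), ("C", "B"), ("D", "B")])
nodes = sorted(G.nodes)
sinks = {n for n in nodes if G.out_degree(n) == 0}

def reachable_sinks(n):
    return (nx.descendants(G, n) | {n}) & sinks

def dist(a, b):
    sa, sb = reachable_sinks(a), reachable_sinks(b)
    return 1 - len(sa & sb) / len(sa | sb)

# Condensed distance matrix (upper triangle, row by row), then single linkage.
dm = [dist(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
Z = linkage(dm, method="single")
print(dict(zip(nodes, fcluster(Z, t=0.5, criterion="distance"))))
```

In a citation network the sinks are the earliest, uncited-by-nothing decisions, so vertices that drain to the same sinks share doctrinal ancestry; any sink-respecting distance exploits that dynamic.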


Artificial Intelligence and Law | 2014

Measuring the complexity of the law: the United States Code

Daniel Martin Katz; Michael James Bommarito

Einstein’s razor, a corollary of Ockham’s razor, is often paraphrased as follows: make everything as simple as possible, but not simpler. This rule of thumb describes the challenge that designers of a legal system face: to craft simple laws that produce desired ends, but not to pursue simplicity so far as to undermine those ends. Complexity, simplicity’s inverse, taxes cognition and increases the likelihood of suboptimal decisions. In addition, unnecessary legal complexity can drive a misallocation of human capital toward comprehending and complying with legal rules and away from other productive ends. While many scholars have offered descriptive accounts or theoretical models of legal complexity, most empirical research to date has been limited to simple measures of size, such as the number of pages in a bill. No extant research rigorously applies a meaningful model to real data. As a consequence, we have no reliable means to determine whether a new bill, regulation, order, or precedent substantially affects legal complexity. In this paper, we begin to address this need by developing a proposed empirical framework for measuring relative legal complexity. This framework is based on “knowledge acquisition”, an approach at the intersection of psychology and computer science, which can take into account the structure, language, and interdependence of law. We then demonstrate the descriptive value of this framework by applying it to the U.S. Code’s Titles, scoring and ranking them by their relative complexity. We measure various features of a title, including its structural size, the net flow of its intra-title citations, and its linguistic entropy. Our framework is flexible, intuitive, and transparent, and we offer this approach as a first step in developing a practical methodology for assessing legal complexity.
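
Of the three feature families, linguistic entropy is the simplest to illustrate: at its core it is Shannon entropy over a title's word distribution. A minimal sketch, with a made-up snippet and deliberately crude tokenization:

```python
# Shannon entropy of a word distribution, a minimal version of the
# linguistic-entropy feature. Tokenization is deliberately crude here;
# the paper's preprocessing is more careful.
import math
from collections import Counter

def word_entropy(text: str) -> float:
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

sample = "the tax imposed by this chapter shall be computed as provided in this section"
print(f"{word_entropy(sample):.3f} bits per word")
```

Higher entropy means a reader must hold a larger, less predictable vocabulary in mind, which is why it serves as a proxy for the knowledge-acquisition cost of a title's language.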


Science | 2017

Harnessing legal complexity

J. B. Ruhl; Daniel Martin Katz; Michael James Bommarito

Bring tools of complexity science to bear on improving law. Complexity science has spread from its origins in the physical sciences into biological and social sciences (1). Increasingly, the social sciences frame policy problems from the financial system to the food system as complex adaptive systems (CAS) and urge policy-makers to design legal solutions with CAS properties in mind. What is often poorly recognized in these initiatives is that legal systems are also complex adaptive systems (2). Just as it seems unwise to pursue regulatory measures while ignoring known CAS properties of the systems targeted for regulation, so too might failure to appreciate CAS qualities of legal systems yield policies founded upon unrealistic assumptions. Despite a long empirical studies tradition in law, there has been little use of complexity science. With few robust empirical studies of legal systems as CAS, researchers are left to gesture at seemingly evident assertions, with limited scientific support. We outline a research agenda to help fill this knowledge gap and advance practical applications.


PLOS ONE | 2017

A General Approach for Predicting the Behavior of the Supreme Court of the United States

Daniel Martin Katz; Michael James Bommarito; Josh Blackman

Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time-evolving random forest classifier that leverages unique feature engineering to predict more than 240,000 justice votes and 28,000 case outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on, the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications.
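
The evaluation protocol can be sketched as a walk-forward loop: for each term, fit only on earlier data, then score that term out-of-sample. The file name, columns, and features below are hypothetical placeholders assumed to be already numerically encoded; the paper's actual feature engineering is far richer.

```python
# Walk-forward evaluation: for each term, train a random forest only on
# votes from earlier terms, then predict that term. The file name, columns,
# and features are hypothetical, and features are assumed already numeric.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("scdb_votes.csv")  # hypothetical: one row per justice vote
features = ["justice_ideology", "issue_area_code", "lower_court_direction"]

accuracy = {}
for term in sorted(df["term"].unique())[1:]:
    train = df[df["term"] < term]   # only information available before decision
    test = df[df["term"] == term]
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(train[features], train["vote_reverse"])
    accuracy[term] = clf.score(test[features], test["vote_reverse"])
```

Refitting at every term is what makes the classifier "time-evolving": no observation ever informs a prediction about its own or an earlier term, so every reported score is genuinely out-of-sample.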


arXiv: Physics and Society | 2017

Law on the Market? Abnormal Stock Returns and Supreme Court Decision-Making

Daniel Martin Katz; Michael James Bommarito; Tyler Soellinger; James Ming Chen

Do judicial decisions affect the securities markets in discernible and perhaps predictable ways? In other words, is there “law on the market” (LOTM)? This is a question that has been raised by commentators, but answered by very few in a systematic and financially rigorous manner. Using intraday data and a multiday event window, this large-scale event study seeks to determine the existence, frequency, and magnitude of equity market impacts flowing from Supreme Court decisions. We demonstrate that, while certainly not present in every case, “law on the market” events are fairly common. Across all cases decided by the Supreme Court of the United States between the 1999-2013 terms, we identify 79 cases where the share price of one or more publicly traded companies moved in direct response to a Supreme Court decision. In the aggregate, over fifteen years, Supreme Court decisions were responsible for more than 140 billion dollars in absolute changes in wealth. Our analysis not only contributes to our understanding of the political economy of judicial decision making, but also links to the broader body of research exploring the performance of financial markets using event study methods. We conclude by exploring the informational efficiency of law as a market by highlighting the speed at which information from Supreme Court decisions is assimilated by the market. Relatively speaking, LOTM events have historically exhibited slow rates of information incorporation for affected securities. This implies a market ripe for arbitrage, where an event-based trading strategy could be successful.
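
The core of such an event study is a market-model estimate of expected returns followed by cumulating abnormal returns over the event window. A minimal sketch, with hypothetical input data, tickers, and dates:

```python
# Market-model event study: fit expected returns on a pre-event window,
# then cumulate abnormal returns over the event window. The data file,
# tickers, and dates are hypothetical placeholders.
import numpy as np
import pandas as pd

rets = pd.read_csv("daily_returns.csv", index_col="date", parse_dates=True)
stock, market = rets["AFFECTED_CO"], rets["SP500"]

est = slice("2013-01-02", "2013-06-14")    # estimation window
event = slice("2013-06-17", "2013-06-21")  # multiday event window

# Market model: r_stock = alpha + beta * r_market + eps
beta, alpha = np.polyfit(market.loc[est], stock.loc[est], 1)
abnormal = stock.loc[event] - (alpha + beta * market.loc[event])
print(f"CAR over event window: {abnormal.sum():.4%}")
```

The cumulative abnormal return (CAR) isolates the price movement not explained by the market as a whole, which is what lets the study attribute wealth changes to the decision itself.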


arXiv: Computation and Language | 2018

OpenEDGAR: Open Source Software for SEC EDGAR Analysis

Michael James Bommarito; Daniel Martin Katz; Eric Detterman

OpenEDGAR is an open source Python framework designed to rapidly construct research databases based on the Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system operated by the US Securities and Exchange Commission (SEC). OpenEDGAR is built on the Django application framework, supports distributed computation across one or more servers, and includes functionality to (i) retrieve and parse index and filing data from EDGAR, (ii) build tables for key metadata like form type and filer, (iii) retrieve, parse, and update CIK to ticker and industry mappings, (iv) extract content and metadata from filing documents, and (v) search filing document contents. OpenEDGAR is designed for use in both academic research and industrial applications, and is distributed under the MIT License.
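
Independently of OpenEDGAR's own API, the first step it automates is easy to sketch: EDGAR publishes quarterly full-index files at a stable public URL pattern. The user-agent string below is a placeholder (the SEC asks requesters to identify themselves):

```python
# Pull one quarterly form index from EDGAR's public full-index tree and
# count plain 10-K rows. The SEC asks for an identifying User-Agent; the
# one below is a placeholder.
import requests

url = "https://www.sec.gov/Archives/edgar/full-index/2018/QTR1/form.idx"
resp = requests.get(url, headers={"User-Agent": "research-use you@example.com"})
resp.raise_for_status()

# form.idx is fixed-width text: form type, company name, CIK, date filed,
# and file name, preceded by a short header block.
rows = [ln for ln in resp.text.splitlines() if ln.startswith("10-K ")]
print(f"{len(rows)} 10-K filings indexed for 2018 Q1")
```

Everything past this point (parsing the fixed-width rows into tables, mapping CIKs, extracting filing content) is what the framework packages up behind Django models and distributed workers.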


arXiv: Computation and Language | 2018

LexNLP: Natural Language Processing and Information Extraction For Legal and Regulatory Texts

Michael James Bommarito; Daniel Martin Katz; Eric Detterman

LexNLP is an open source Python package focused on natural language processing and machine learning for legal and regulatory text. The package includes functionality to (i) segment documents, (ii) identify key text such as titles and section headings, (iii) extract over eighteen types of structured information like distances and dates, (iv) extract named entities such as companies and geopolitical entities, (v) transform text into features for model training, and (vi) build unsupervised and supervised models such as word embedding or tagging models. LexNLP includes pre-trained models based on thousands of unit tests drawn from real documents available from the SEC EDGAR database as well as various judicial and regulatory proceedings. LexNLP is designed for use in both academic research and industrial applications.
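
A short usage sketch, assuming LexNLP's English extraction modules are laid out as shown; the sample sentence is made up and the printed results are illustrative rather than verified output.

```python
# Sample extraction calls; the module paths follow LexNLP's documented
# layout, and the commented results are illustrative, not verified output.
from lexnlp.extract.en.amounts import get_amounts
from lexnlp.extract.en.dates import get_dates

text = ("This Agreement is effective as of June 1, 2017 and requires "
        "payment of two million dollars within thirty days.")

print(list(get_dates(text)))    # e.g. [datetime.date(2017, 6, 1)]
print(list(get_amounts(text)))  # e.g. [2000000.0, 30]
```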


Ohio State Law Journal | 2008

Hustle and Flow: A Social Network Analysis of the American Federal Judiciary

Daniel Martin Katz; Derek Stafford


arXiv: Physics and Society | 2014

Predicting the Behavior of the Supreme Court of the United States: A General Approach

Daniel Martin Katz; Michael James Bommarito; Josh Blackman

Collaboration


Dive into Daniel Martin Katz's collaborations.

Top Co-Authors

Jon Zelner (University of Michigan)
Josh Blackman (South Texas College of Law)
Jonathan L. Zelner (National Institutes of Health)
Adam Candeub (Michigan State University)
Eitan Ingall (Children's Hospital of Philadelphia)