
Publication


Featured research published by Chris Kanich.


IEEE Symposium on Security and Privacy | 2011

Click Trajectories: End-to-End Analysis of the Spam Value Chain

Kirill Levchenko; Andreas Pitsillidis; Neha Chachra; Brandon Enright; Mark Felegyhazi; Chris Grier; Tristan Halvorson; Chris Kanich; Christian Kreibich; He Liu; Damon McCoy; Nicholas Weaver; Vern Paxson; Geoffrey M. Voelker; Stefan Savage

Spam-based advertising is a business. While it has engendered both widespread antipathy and a multi-billion dollar anti-spam industry, it continues to exist because it fuels a profitable enterprise. We lack, however, a solid understanding of this enterprise's full structure, and thus most anti-spam interventions focus on only one facet of the overall spam value chain (e.g., spam filtering, URL blacklisting, site takedown). In this paper we present a holistic analysis that quantifies the full set of resources employed to monetize spam email -- including naming, hosting, payment and fulfillment -- using extensive measurements of three months of diverse spam data, broad crawling of naming and hosting infrastructures, and over 100 purchases from spam-advertised sites. We relate these resources to the organizations who administer them and then use this data to characterize the relative prospects for defensive interventions at each link in the spam value chain. In particular, we provide the first strong evidence of payment bottlenecks in the spam value chain: 95% of spam-advertised pharmaceutical, replica and software products are monetized using merchant services from just a handful of banks.


IEEE Symposium on Security and Privacy | 2015

Every Second Counts: Quantifying the Negative Externalities of Cybercrime via Typosquatting

Mohammad Taha Khan; Xiang Huo; Zhou Li; Chris Kanich

While we have a good understanding of how cybercrime is perpetrated and the profits of the attackers, the harm experienced by humans is less well understood, and reducing this harm should be the ultimate goal of any security intervention. This paper presents a strategy for quantifying the harm caused by the cybercrime of typosquatting via the novel technique of intent inference. Intent inference allows us to define a new metric for quantifying harm to users, develop a new methodology for identifying typosquatting domain names, and quantify the harm caused by various typosquatting perpetrators. We find that typosquatting costs the typical user 1.3 seconds per typosquatting event over the alternative of receiving a browser error page, and legitimate sites lose approximately 5% of their mistyped traffic over the alternative of an unregistered typo. Although on average perpetrators increase the time it takes a user to find their intended site, many typosquatters actually improve the latency between a typo and its correction, calling into question the necessity of harsh penalties or legal intervention against this flavor of cybercrime.
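The per-event cost and traffic-loss figures above define a simple aggregate harm model. As a minimal sketch of how those numbers compose (the event and visit counts below are hypothetical, not from the paper):

```python
# Aggregate the time-based harm metric described above.
# The per-event delay (1.3 s) and traffic-loss rate (5%) come from the
# abstract; the event and visit counts are illustrative assumptions.
SECONDS_PER_EVENT = 1.3       # typical extra delay per typosquatting event
MISTYPED_TRAFFIC_LOSS = 0.05  # share of mistyped traffic a legitimate site loses

def total_harm_hours(events: int, seconds_per_event: float = SECONDS_PER_EVENT) -> float:
    """Total user time lost, in hours, across `events` typosquatting events."""
    return events * seconds_per_event / 3600

def lost_visits(mistyped_visits: int, loss_rate: float = MISTYPED_TRAFFIC_LOSS) -> float:
    """Mistyped visits the legitimate site fails to recover."""
    return mistyped_visits * loss_rate

# One million events cost users 1.3 million seconds, about 361 hours in total.
print(round(total_harm_hours(1_000_000), 1))  # 361.1
print(lost_visits(10_000))                    # 500.0
```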


Proceedings of the 2014 Workshop on Artificial Intelligence and Security (AISec) | 2014

Leveraging Machine Learning to Improve Unwanted Resource Filtering

Sruti Bhagavatula; Christopher W. Dunn; Chris Kanich; Minaxi Gupta; Brian D. Ziebart

Advertisements simultaneously provide both economic support for most free web content and one of the largest annoyances to end users. Furthermore, the modern advertisement ecosystem is rife with tracking methods which violate user privacy. A natural reaction is for users to install ad blockers which prevent advertisers from tracking users or displaying ads. Traditional ad blocking software relies upon hand-crafted filter expressions to generate large, unwieldy regular expressions matched against resources being included within web pages. This process requires a large amount of human overhead and is susceptible to inferior filter generation. We propose an alternate approach which leverages machine learning to bootstrap a superior classifier for ad blocking with less human intervention. We show that our classifier can simultaneously maintain an accuracy similar to the hand-crafted filters while also blocking new ads which would otherwise necessitate further human intervention in the form of additional handmade filter rules.
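As a toy illustration of the core idea, learning a resource filter from labeled examples rather than hand-writing filter rules, here is a tiny token-based naive Bayes classifier over URLs. This is not the paper's classifier (which uses richer features and models), and the training URLs are invented:

```python
# Toy sketch: learn an ad/not-ad label for requested URLs from examples,
# instead of maintaining hand-crafted filter expressions.
# NOT the paper's method; a minimal naive Bayes over URL tokens.
import math
import re
from collections import Counter

def tokens(url: str) -> list[str]:
    """Split a URL into lowercase tokens on common delimiters."""
    return [t for t in re.split(r"[/.\-_?=&:]+", url.lower()) if t]

class TinyNaiveBayes:
    def __init__(self):
        self.counts = {"ad": Counter(), "benign": Counter()}
        self.totals = {"ad": 0, "benign": 0}

    def fit(self, samples):
        for url, label in samples:
            for t in tokens(url):
                self.counts[label][t] += 1
                self.totals[label] += 1

    def predict(self, url: str) -> str:
        # Laplace-smoothed log-likelihood per class; highest score wins.
        scores = {}
        for label in self.counts:
            total = self.totals[label] + 1
            scores[label] = sum(
                math.log((self.counts[label][t] + 1) / total) for t in tokens(url)
            )
        return max(scores, key=scores.get)

# Invented training data for illustration only.
train = [
    ("http://ads.example.com/banner300x250.gif", "ad"),
    ("http://tracker.example.net/pixel?id=1", "ad"),
    ("http://example.org/article/science.html", "benign"),
    ("http://example.org/css/style.css", "benign"),
]
clf = TinyNaiveBayes()
clf.fit(train)
print(clf.predict("http://ads.example.net/banner.gif"))  # ad
```

A real system would use many more features than URL tokens (resource type, page position, third-party status), which is the direction the paper takes.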


The Economics of Information Security and Privacy | 2013

Analysis of ecrime in crowd-sourced labor markets: Mechanical Turk vs. Freelancer

Vaibhav Garg; L. Jean Camp; Chris Kanich

Research in the economics of security has contributed more than a decade of empirical findings to the understanding of the microeconomics of (in)security, privacy, and ecrime. Here we build on insights from previous macro-level research on crime, and microeconomic analyses of ecrime, to develop a set of hypotheses to predict which variables are correlated with national participation levels in crowd-sourced ecrime. Some hypotheses appear to hold: Internet penetration, English literacy, size of the labor market, and government policy are all significant indicators of crowd-sourced ecrime market participation. Greater governmental transparency, less corruption, and more consistent rule of law lower the participation rate in ecrime. Other results are counter-intuitive. GDP per person is not significant, and, unusually for crime, a greater percentage of women does not correlate with decreased crime. One finding relevant to policymaking is that deterring bidders in crowd-sourced labor markets is an ineffective approach to decreasing demand and in turn market size.


Internet Measurement Conference | 2017

Fifteen Minutes of Unwanted Fame: Detecting and Characterizing Doxing

Peter Snyder; Periwinkle Doerfler; Chris Kanich; Damon McCoy

Doxing is online abuse where a malicious party harms another by releasing identifying or sensitive information. Motivations for doxing include personal, competitive, and political reasons, and web users of all ages, genders and internet experience have been targeted. Existing research on doxing is primarily qualitative. This work improves our understanding of doxing by being the first to take a quantitative approach. We do so by designing and deploying a tool which can detect dox files and measure the frequency, content, targets, and effects of doxing on popular dox-posting sites. This work analyzes over 1.7 million text files posted to pastebin.com, 4chan.org and 8ch.net, sites frequently used to share doxes online, over a combined period of approximately thirteen weeks. Notable findings in this work include that approximately 0.3% of shared files are doxes, that online social networking accounts mentioned in these dox files are more likely to close than typical accounts, that justice and revenge are the most often cited motivations for doxing, and that dox files target males more frequently than females. We also find that recent anti-abuse efforts by social networks have reduced how frequently these doxing victims closed or restricted their accounts after being attacked. We also propose mitigation steps, such as a service that can inform people when their accounts have been shared in a dox file, or law enforcement notification tools to inform authorities when individuals are at heightened risk of abuse.
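One way to see the shape of the detection problem is a pattern-based heuristic: dox files tend to combine many distinct categories of identifying information in a single document. The patterns and threshold below are hypothetical illustrations, not the paper's detector:

```python
# Illustrative heuristic for flagging possible dox files in a stream of
# pasted text documents. The paper builds a real measurement tool; this
# sketch only shows the flavor of the problem: dox files bundle several
# categories of identifying information at once.
import re

# Hypothetical indicator patterns; real detection needs far more care.
INDICATORS = {
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "address": re.compile(r"\b\d+\s+\w+\s+(street|st|ave|avenue|rd|road)\b", re.I),
    "label": re.compile(r"\b(full name|dob|date of birth|address)\s*:", re.I),
}

def looks_like_dox(text: str, min_categories: int = 3) -> bool:
    """Flag text matching several distinct identifying-info categories."""
    hits = sum(1 for pat in INDICATORS.values() if pat.search(text))
    return hits >= min_categories

sample = "Full Name: J. Doe\nDOB: 01/02/1990\nAddress: 12 Oak Street\nPhone: 555-123-4567"
print(looks_like_dox(sample))  # True
```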


International Conference on Data Mining | 2017

Network Model Selection for Task-Focused Attributed Network Inference

Ivan Brugere; Chris Kanich; Tanya Y. Berger-Wolf

Networks are models representing relationships between entities. Often these relationships are explicitly given, or we must learn a representation which generalizes and predicts observed behavior in underlying individual data (e.g. attributes or labels). Whether given or inferred, choosing the best representation affects subsequent tasks and questions on the network. This work focuses on model selection to evaluate network representations from data, with an emphasis on fundamental predictive tasks on networks. We present a modular methodology using general, interpretable network models, task neighborhood functions found across domains, and several criteria for robust model selection. We demonstrate our methodology on three online user activity datasets and show that network model selection for the appropriate network task vs. an alternate task increases performance by an order of magnitude in our experiments.


Internet Measurement Conference | 2016

Browser Feature Usage on the Modern Web

Peter Snyder; Lara Ansari; Cynthia Taylor; Chris Kanich

Modern web browsers are incredibly complex, with millions of lines of code and over one thousand JavaScript functions and properties available to website authors. This work investigates how these browser features are used on the modern, open web. We find that JavaScript features differ wildly in popularity, with over 50% of provided features never used on the web's 10,000 most popular sites according to Alexa. We also look at how popular ad and tracking blockers change the features used by sites, and identify a set of approximately 10% of features that are disproportionately blocked (prevented from executing by these extensions at least 90% of the time they are used). We additionally find that in the presence of these blockers, over 83% of available features are executed on less than 1% of the most popular 10,000 websites. We further measure other aspects of browser feature usage on the web, including how many features websites use, how the length of time a browser feature has been in the browser relates to its usage on the web, and how many security vulnerabilities have been associated with related browser features.


Journal of Cybersecurity | 2016

Characterizing fraud and its ramifications in affiliate marketing networks

Peter Snyder; Chris Kanich

Cookie stuffing is an activity which allows unscrupulous actors online to defraud affiliate marketing programs by causing themselves to receive credit for purchases made by web users, even if the affiliate marketer did not actively perform any marketing for the affiliate program. Using 2 months of HTTP request logs from a large public university, we present an empirical study of fraud in affiliate marketing programs. First, we develop an efficient, decision-tree based technique for detecting cookie-stuffing in HTTP request logs. Our technique replicates domain-informed human labeling of the same data with 93.3% accuracy. Second, we find that over one-third of publishers in affiliate marketing programs use fraudulent cookie-stuffing techniques in an attempt to claim credit from online retailers for illicit referrals. However, most realized conversions are credited to honest publishers. Finally, we present a stakeholder analysis of affiliate marketing fraud and find that the costs and rewards of affiliate marketing programs are spread across all parties involved.
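The decision-tree detection idea can be sketched with a single hand-written rule of the same flavor: an affiliate click URL fetched silently as an embedded subresource, rather than by a user navigation, is suspect. The field names and the `affid=` marker below are assumptions for illustration, not the paper's trained features or thresholds:

```python
# Sketch of a decision-tree-style check for cookie stuffing in HTTP
# request logs. The paper trains a real decision tree on labeled data;
# the fields and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    url: str                  # requested URL (may be an affiliate click URL)
    referer: str              # page that triggered the request (context only)
    content_type: str         # MIME type of the response
    is_user_navigation: bool  # top-level navigation vs. embedded subresource

def is_affiliate_click(url: str) -> bool:
    # Hypothetical marker; real affiliate URLs vary by network.
    return "affid=" in url or "/click?" in url

def looks_like_stuffing(req: Request) -> bool:
    """Affiliate click loaded silently as an embedded image is suspect."""
    if not is_affiliate_click(req.url):
        return False
    # A genuine referral is a user-initiated navigation to the merchant;
    # an affiliate URL fetched as an invisible image subresource is not.
    return (not req.is_user_navigation) and req.content_type.startswith("image/")

hidden = Request("http://merchant.example/click?affid=42",
                 "http://blog.example/post", "image/gif", False)
print(looks_like_stuffing(hidden))  # True
```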


ACM Special Interest Group on Data Communication | 2015

High Fidelity, High Risk, High Reward: Using High-Fidelity Networking Data in Ethically Sound Research

Mohammad Taha Khan; Chris Kanich

Network tap data can provide researchers with access to every packet flowing into or out of an organization. However, building a sound ethical framework around using this data is a necessary task before the community can embrace this data source. Here we describe the ethical issues, present example use cases, and suggest strategies for creating a strong ethical footing for this research while maintaining some level of utility to the researchers.


Conference on Data and Application Security and Privacy | 2015

One Thing Leads to Another: Credential Based Privilege Escalation

Peter Snyder; Chris Kanich

A user's primary email account, in addition to being an easy point of contact in our online world, is increasingly being used as a single point of failure for all web security. Features like unlimited message storage, numerous weak password reset features and economically enticing spoils (in the form of financial accounts or personal photos) all add up to an environment where overthrowing someone's life via their primary email account is increasingly likely and damaging. We describe an attack we call credential based privilege escalation, and a methodology to evaluate this attack's potential for user harm at web scale. In a study of over 9,000 users we find that, unsurprisingly, access to a vast number of online accounts can be gained by breaking into a user's primary email account (even without knowing the email account's password), but even then the monetizable value in a typical account is relatively low. We also describe future directions in understanding both the technical and human aspects of credential based privilege escalation.

Collaboration


Dive into Chris Kanich's collaborations.

Top Co-Authors

Stefan Savage | University of California
Peter Snyder | University of Illinois at Chicago
Vern Paxson | University of California
Damon McCoy | George Mason University
Mohammad Taha Khan | University of Illinois at Chicago