
Publication


Featured research published by Kirill Levchenko.


internet measurement conference | 2013

A fistful of bitcoins: characterizing payments among men with no names

Sarah Meiklejohn; Marjori Pomarole; Grant Jordan; Kirill Levchenko; Damon McCoy; Geoffrey M. Voelker; Stefan Savage

Bitcoin is a purely online virtual currency, unbacked by either physical commodities or sovereign obligation; instead, it relies on a combination of cryptographic protection and a peer-to-peer protocol for witnessing settlements. Consequently, Bitcoin has the unintuitive property that while the ownership of money is implicitly anonymous, its flow is globally visible. In this paper we explore this unique characteristic further, using heuristic clustering to group Bitcoin wallets based on evidence of shared authority, and then using re-identification attacks (i.e., empirical purchasing of goods and services) to classify the operators of those clusters. From this analysis, we characterize longitudinal changes in the Bitcoin market, the stresses these changes are placing on the system, and the challenges for those seeking to use Bitcoin for criminal or fraudulent purposes at scale.
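
The clustering step the abstract describes can be illustrated with the standard multi-input heuristic: addresses that appear together as inputs to one transaction are assumed to share an owner, and those ownership relations are merged with a union-find structure. The sketch below is a minimal illustration of that idea, not the paper's implementation; the toy transactions and the `cluster_addresses` helper are hypothetical, and the paper's additional change-address heuristic and re-identification step are omitted.

```python
class UnionFind:
    """Disjoint-set structure for grouping addresses under one owner."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(transactions):
    """Multi-input heuristic: all input addresses of a transaction are
    assumed to be controlled by the same entity."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs:
            uf.find(addr)              # register every address seen
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)  # merge co-spent addresses
    clusters = {}
    for addr in list(uf.parent):
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

# Hypothetical example: two transactions sharing address "B" collapse
# three addresses into one cluster; "D" stays on its own.
txs = [{"inputs": ["A", "B"]}, {"inputs": ["B", "C"]}, {"inputs": ["D"]}]
print(cluster_addresses(txs))
```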


internet measurement conference | 2006

Unexpected means of protocol inference

Justin Ma; Kirill Levchenko; Christian Kreibich; Stefan Savage; Geoffrey M. Voelker

Network managers are inevitably called upon to associate network traffic with particular applications. Indeed, this operation is critical for a wide range of management functions ranging from debugging and security to analytics and policy support. Traditionally, managers have relied on application adherence to a well-established global port mapping: Web traffic on port 80, mail traffic on port 25 and so on. However, a range of factors - including firewall port blocking, tunneling, dynamic port allocation, and a bloom of new distributed applications - has weakened the value of this approach. We analyze three alternative mechanisms using statistical and structural content models for automatically identifying traffic that uses the same application-layer protocol, relying solely on flow content. In this manner, known applications may be identified regardless of port number, while traffic from one unknown application will be identified as distinct from another. We evaluate each mechanism's classification performance using real-world traffic traces from multiple sites.
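
One of the statistical models the abstract alludes to can be approximated as a product distribution over the first few payload bytes: learn a per-offset byte distribution for each known protocol, then assign a flow to the model that gives it the highest likelihood. The sketch below is a rough illustration under that assumption, not the paper's exact estimator; the 64-byte window, the add-one smoothing, and the toy training traces are arbitrary choices.

```python
import math
from collections import defaultdict

N_BYTES = 64  # number of leading payload bytes to model

def train_product_model(flows):
    """Estimate an independent byte-value distribution for each payload
    offset (add-one smoothing); `flows` is a list of bytes objects."""
    counts = [defaultdict(int) for _ in range(N_BYTES)]
    for payload in flows:
        for i, b in enumerate(payload[:N_BYTES]):
            counts[i][b] += 1
    model = []
    for i in range(N_BYTES):
        total = sum(counts[i].values()) + 256
        model.append({b: (counts[i].get(b, 0) + 1) / total for b in range(256)})
    return model

def log_likelihood(model, payload):
    return sum(math.log(model[i][b]) for i, b in enumerate(payload[:N_BYTES]))

def classify(models, payload):
    """Assign the flow to the protocol whose model likes it best."""
    return max(models, key=lambda proto: log_likelihood(models[proto], payload))

# Hypothetical usage with tiny toy traces:
models = {
    "http": train_product_model([b"GET / HTTP/1.1\r\nHost: a\r\n\r\n"]),
    "smtp": train_product_model([b"HELO example.org\r\nMAIL FROM:<a@b>\r\n"]),
}
print(classify(models, b"GET /index.html HTTP/1.0\r\n\r\n"))  # "http"
```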


ieee symposium on security and privacy | 2011

Click Trajectories: End-to-End Analysis of the Spam Value Chain

Kirill Levchenko; Andreas Pitsillidis; Neha Chachra; Brandon Enright; Mark Felegyhazi; Chris Grier; Tristan Halvorson; Chris Kanich; Christian Kreibich; He Liu; Damon McCoy; Nicholas Weaver; Vern Paxson; Geoffrey M. Voelker; Stefan Savage

Spam-based advertising is a business. While it has engendered both widespread antipathy and a multi-billion dollar anti-spam industry, it continues to exist because it fuels a profitable enterprise. We lack, however, a solid understanding of this enterprise's full structure, and thus most anti-spam interventions focus on only one facet of the overall spam value chain (e.g., spam filtering, URL blacklisting, site takedown). In this paper we present a holistic analysis that quantifies the full set of resources employed to monetize spam email -- including naming, hosting, payment and fulfillment -- using extensive measurements of three months of diverse spam data, broad crawling of naming and hosting infrastructures, and over 100 purchases from spam-advertised sites. We relate these resources to the organizations who administer them and then use this data to characterize the relative prospects for defensive interventions at each link in the spam value chain. In particular, we provide the first strong evidence of payment bottlenecks in the spam value chain: 95% of spam-advertised pharmaceutical, replica and software products are monetized using merchant services from just a handful of banks.


internet measurement conference | 2011

An analysis of underground forums

Marti Motoyama; Damon McCoy; Kirill Levchenko; Stefan Savage; Geoffrey M. Voelker

Underground forums, where participants exchange information on abusive tactics and engage in the sale of illegal goods and services, are a form of online social network (OSN). However, unlike traditional OSNs such as Facebook, in underground forums the pattern of communications does not simply encode pre-existing social relationships, but instead captures the dynamic trust relationships forged between mutually distrustful parties. In this paper, we empirically characterize six different underground forums --- BlackHatWorld, Carders, HackSector, HackE1ite, Freehack, and L33tCrew --- examining the properties of the social networks formed within, the content of the goods and services being exchanged, and lastly, how individuals gain and lose trust in this setting.
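
As a small illustration of the kind of social-network analysis the abstract mentions, a scraped forum dump can be turned into a directed interaction graph so that reply patterns become measurable. The sketch below (using networkx) is only a coarse proxy with hypothetical thread data; the paper's characterization also draws on private messages and explicit trading relationships.

```python
import networkx as nx

def build_interaction_graph(threads):
    """Directed reply graph: an edge u -> v means user u posted in a
    thread after user v (a rough proxy for interaction)."""
    g = nx.DiGraph()
    for thread in threads:
        posters = [p["author"] for p in thread["posts"]]
        for i, author in enumerate(posters[1:], start=1):
            for earlier in set(posters[:i]):
                if earlier != author:
                    g.add_edge(author, earlier)
    return g

# Hypothetical toy data standing in for a scraped forum dump.
threads = [
    {"posts": [{"author": "seller1"}, {"author": "buyer1"}, {"author": "seller1"}]},
    {"posts": [{"author": "buyer2"}, {"author": "seller1"}]},
]
g = build_interaction_graph(threads)
print(g.number_of_nodes(), g.number_of_edges())
print(sorted(g.in_degree(), key=lambda kv: -kv[1]))  # most-replied-to users
```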


acm special interest group on data communication | 2009

Every microsecond counts: tracking fine-grain latencies with a lossy difference aggregator

Ramana Rao Kompella; Kirill Levchenko; Alex C. Snoeren; George Varghese

Many network applications have stringent end-to-end latency requirements, including VoIP and interactive video conferencing, automated trading, and high-performance computing---where even microsecond variations may be intolerable. The resulting fine-grain measurement demands cannot be met effectively by existing technologies, such as SNMP, NetFlow, or active probing. We propose instrumenting routers with a hash-based primitive that we call a Lossy Difference Aggregator (LDA) to measure latencies down to tens of microseconds and losses as infrequent as one in a million. Such measurement can be viewed abstractly as what we refer to as a coordinated streaming problem, which is fundamentally harder than standard streaming problems due to the need to coordinate values between nodes. We describe a compact data structure that efficiently computes the average and standard deviation of latency and loss rate in a coordinated streaming environment. Our theoretical results translate to an efficient hardware implementation at 40 Gbps using less than 1% of a typical 65-nm 400-MHz networking ASIC. When compared to Poisson-spaced active probing with similar overheads, our LDA mechanism delivers orders of magnitude smaller relative error; active probing requires 50--60 times as much bandwidth to deliver similar levels of accuracy.
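
The Lossy Difference Aggregator idea can be sketched as follows: sender and receiver hash each packet into the same bucket and accumulate a timestamp sum and a packet count; after the interval, only buckets whose counts match on both sides (no loss landed there) are usable, and average latency is the difference of their timestamp sums divided by the packet count. This is a single-bank sketch for illustration only; the actual LDA uses multiple banks with different sampling probabilities to stay useful under higher loss, and the hash choice, bucket count, and toy timestamps below are assumptions.

```python
import zlib

class LDA:
    """Single-bank Lossy Difference Aggregator sketch: each packet is
    hashed to a bucket that accumulates a timestamp sum and a count."""
    def __init__(self, n_buckets=1024):
        self.n = n_buckets
        self.sums = [0.0] * n_buckets
        self.counts = [0] * n_buckets

    def update(self, pkt_id, timestamp):
        b = zlib.crc32(pkt_id) % self.n   # both ends must use the same hash
        self.sums[b] += timestamp
        self.counts[b] += 1

def average_latency(sender, receiver):
    """Combine sender/receiver LDAs; only buckets with matching packet
    counts on both sides (i.e., no loss landed there) are usable."""
    total_delay, total_pkts = 0.0, 0
    for b in range(sender.n):
        if sender.counts[b] == receiver.counts[b] and sender.counts[b] > 0:
            total_delay += receiver.sums[b] - sender.sums[b]
            total_pkts += sender.counts[b]
    return total_delay / total_pkts if total_pkts else None

# Hypothetical usage: packets take 50 microseconds; one is lost.
tx, rx = LDA(), LDA()
for i in range(10_000):
    pkt = str(i).encode()
    tx.update(pkt, timestamp=i * 1e-3)
    if i != 1234:  # drop one packet
        rx.update(pkt, timestamp=i * 1e-3 + 50e-6)
print(average_latency(tx, rx))  # approximately 5e-05 seconds
```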


acm special interest group on data communication | 2008

XL: an efficient network routing algorithm

Kirill Levchenko; Geoffrey M. Voelker; Ramamohan Paturi; Stefan Savage

In this paper, we present a new link-state routing algorithm called Approximate Link state (XL) aimed at increasing routing efficiency by suppressing updates from parts of the network. We prove that three simple criteria for update propagation are sufficient to guarantee soundness, completeness and bounded optimality for any such algorithm. We show, via simulation, that XL significantly outperforms standard link-state and distance vector algorithms - in some cases reducing overhead by more than an order of magnitude - while having negligible impact on path length. Finally, we argue that existing link-state protocols, such as OSPF, can incorporate XL routing in a backwards compatible and incrementally deployable fashion.
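
A loose, illustrative reading of the update-suppression idea (not the paper's exact criteria) is that a node forwards a link-state update whenever staying silent could leave neighbors with unsound routes or routes much worse than optimal: cost increases and changes to links on the node's own shortest-path tree are always forwarded, while small improvements elsewhere may be withheld. The sketch below encodes that paraphrase only; the `RouterState` fields, the EPSILON value, and the simplified cost comparison are all hypothetical.

```python
from dataclasses import dataclass, field

EPSILON = 0.25  # allowed path stretch; an illustrative value, not from the paper

@dataclass
class RouterState:
    """Minimal stand-in for a node's view: links on its shortest-path
    tree and its current cost to each destination (all hypothetical)."""
    tree_links: set = field(default_factory=set)
    route_cost: dict = field(default_factory=dict)

def should_propagate(link, old_cost, new_cost, dest, state):
    """Loose paraphrase of XL-style update suppression."""
    if new_cost > old_cost:
        return True                      # never suppress cost increases
    if link in state.tree_links:
        return True                      # changes to links we route over
    # A cost decrease elsewhere is forwarded only if ignoring it would
    # leave the current route more than (1 + EPSILON) from optimal.
    best_via_link = new_cost  # simplified: treat the link as a direct route
    return state.route_cost.get(dest, float("inf")) > (1 + EPSILON) * best_via_link

# Hypothetical example: a small improvement on an unused link is suppressed.
s = RouterState(tree_links={("A", "B")}, route_cost={"D": 10})
print(should_propagate(("C", "D"), old_cost=9, new_cost=8.5, dest="D", state=s))  # False
```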


computer and communications security | 2004

On the difficulty of scalably detecting network attacks

Kirill Levchenko; Ramamohan Paturi; George Varghese

Most network intrusion tools (e.g., Bro) use per-flow state to reassemble TCP connections and fragments in order to detect network attacks (e.g., SYN Flooding or Connection Hijacking) and preliminary reconnaissance (e.g., Port Scans). On the other hand, if network intrusion detection is to be implemented at high speeds at network vantage points, some form of aggregation is necessary. While many security analysts believe that such per-flow state is required for many of these problems, there is no clear proof that this is the case. In fact, a number of problems (such as detecting large traffic footprints or counting identifiers) have scalable solutions. In this paper, we initiate the study of identifying when and how a security attack detection problem can have a scalable solution. We use tools from Communication Complexity to prove that the common formulations of many well-known intrusion detection problems (detecting SYN Flooding, Port Scans, Connection Hijacking, and content matching across fragments) require per-flow state. Our theory exposes assumptions that need to be changed to provide scalable solutions to these problems; we conclude with some systems techniques to circumvent these lower bounds.


computer and communications security | 2015

Security by Any Other Name: On the Effectiveness of Provider Based Email Security

Ian D. Foster; Jon Larson; Max Masich; Alex C. Snoeren; Stefan Savage; Kirill Levchenko

Email as we use it today makes no guarantees about message integrity, authenticity, or confidentiality. Users must explicitly encrypt and sign message contents using tools like PGP if they wish to protect themselves against message tampering, forgery, or eavesdropping. However, few do, leaving the vast majority of users open to such attacks. Fortunately, transport-layer security mechanisms (available as extensions to SMTP, IMAP, and POP3) provide some degree of protection against network-based eavesdropping attacks. At the same time, DKIM and SPF protect against network-based message forgery and tampering. In this work we evaluate the security provided by these protocols, both in theory and in practice. Using a combination of measurement techniques, we determine whether major providers support TLS at each point in their email message path, and whether they support SPF and DKIM on incoming and outgoing mail. We found that while more than half of the top 20,000 receiving MTAs supported TLS, and support for TLS is increasing, servers do not check certificates, opening the Internet email system up to man-in-the-middle eavesdropping attacks. At the same time, while use of SPF is common, enforcement is limited. Moreover, few of the senders we examined used DKIM, and fewer still rejected invalid DKIM signatures. Our findings show that the global email system provides some protection against passive eavesdropping, limited protection against unprivileged peer message forgery, and no protection against active network-based attacks. We observe that protection even against the latter is possible using existing protocols with proper enforcement.
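
Two of the per-provider checks described above are easy to reproduce at small scale: ask a domain's MX hosts whether they advertise STARTTLS, and look for an SPF policy in the domain's TXT records. The sketch below, assuming the dnspython package, shows only those two probes; the paper's methodology is far broader (certificate validation, DKIM handling, and measurements from both the sending and receiving side), and `example.com` is just a placeholder.

```python
import smtplib
import dns.resolver  # dnspython

def mx_hosts(domain):
    """Return the domain's mail exchangers, lowest preference first."""
    answers = dns.resolver.resolve(domain, "MX")
    return [r.exchange.to_text().rstrip(".")
            for r in sorted(answers, key=lambda r: r.preference)]

def supports_starttls(host, timeout=10):
    """Connect to the MTA on port 25 and ask whether it advertises STARTTLS."""
    with smtplib.SMTP(host, 25, timeout=timeout) as smtp:
        smtp.ehlo()
        return smtp.has_extn("starttls")

def spf_record(domain):
    """Return the domain's SPF policy from its TXT records, if any."""
    try:
        for r in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(r.strings).decode(errors="replace")
            if txt.startswith("v=spf1"):
                return txt
    except dns.resolver.NoAnswer:
        pass
    return None

# Hypothetical usage (outbound port 25 is often blocked, so the STARTTLS
# probe may fail on residential networks):
domain = "example.com"
print(spf_record(domain))
for host in mx_hosts(domain):
    print(host, supports_starttls(host))
```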


international world wide web conferences | 2017

Tools for Automated Analysis of Cybercriminal Markets

Rebecca S. Portnoff; Sadia Afroz; Greg Durrett; Jonathan K. Kummerfeld; Taylor Berg-Kirkpatrick; Damon McCoy; Kirill Levchenko; Vern Paxson

Underground forums are widely used by criminals to buy and sell a host of stolen items, datasets, resources, and criminal services. These forums contain important resources for understanding cybercrime. However, the number of forums, their size, and the domain expertise required to understand the markets makes manual exploration of these forums unscalable. In this work, we propose an automated, top-down approach for analyzing underground forums. Our approach uses natural language processing and machine learning to automatically generate high-level information about underground forums, first identifying posts related to transactions, and then extracting products and prices. We also demonstrate, via a pair of case studies, how an analyst can use these automated approaches to investigate other categories of products and transactions. We use eight distinct forums to assess our tools: Antichat, Blackhat World, Carders, Darkode, Hack Forums, Hell, L33tCrew and Nulled. Our automated approach is fast and accurate, achieving over 80% accuracy in detecting post category, product, and prices.
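
The first two stages the abstract describes (classifying posts as transaction-related, then pulling out prices) can be mocked up with off-the-shelf tools: a TF-IDF plus logistic-regression post classifier and a crude price regex. This is only a stand-in for the paper's models, which use supervised NLP trained on annotated forum data; the toy posts, labels, and regex below are hypothetical.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set; real labels come from annotated posts.
posts = [
    "selling fresh cc dumps, $20 each, escrow accepted",
    "wts facebook accounts 10k followers 50 usd",
    "anyone know a good tutorial for sql injection?",
    "mods please close this thread",
]
labels = ["transaction", "transaction", "other", "other"]

post_classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                LogisticRegression(max_iter=1000))
post_classifier.fit(posts, labels)

PRICE_RE = re.compile(r"(?:\$|usd\s*)?(\d+(?:\.\d{1,2})?)\s*(?:usd|\$)?", re.I)

def extract_prices(text):
    """Very rough price extraction; the paper uses a learned extractor,
    not a regex."""
    return [m.group(1) for m in PRICE_RE.finditer(text)
            if "$" in m.group(0) or "usd" in m.group(0).lower()]

new_post = "WTS paypal accounts, $15 per account, bulk discounts"
print(post_classifier.predict([new_post]))   # e.g. ['transaction']
print(extract_prices(new_post))              # ['15']
```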


international world wide web conferences | 2014

XXXtortion?: inferring registration intent in the .XXX TLD

Tristan Halvorson; Kirill Levchenko; Stefan Savage; Geoffrey M. Voelker

After a decade-long approval process, multiple rejections, and an independent review, ICANN approved the .xxx TLD for inclusion in the Domain Name System, to begin general availability on December 6, 2011. Its sponsoring registry proposed it as an expansion of the name space, as well as a way to separate adult from child-appropriate content. Many independent groups, including trademark holders, political groups, and the adult entertainment industry itself, were concerned that it would primarily generate value through defensive and speculative registrations, without actually serving a real need. This paper measures the validity of these concerns using data gathered from ICANN, whois, and Web requests. We use this information to characterize each .xxx domain and infer the registrant's most likely intent. We find that at most 3.8% of .xxx domains host or redirect to potentially legitimate Web content, with the rest generally serving either defensive or speculative purposes. Indeed, registrants spent roughly
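
A single-signal version of the registration-intent inference can be sketched by fetching each domain over HTTP and checking whether it is unreachable, redirects off the registered domain, or serves a parking page. The sketch below is only that simplification; the paper combines zone files, whois data, and richer content classification, and the parking-page phrases and `example.xxx` domain here are assumptions.

```python
import requests
from urllib.parse import urlparse

PARKING_HINTS = ("this domain is for sale", "parked free", "buy this domain")

def infer_intent(domain, timeout=10):
    """Very rough, single-signal stand-in for the paper's classification:
    label a domain defensive/speculative if it fails to resolve, redirects
    off the registered domain, or serves a parking page."""
    url = f"http://{domain}/"
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "unreachable"
    final_host = urlparse(resp.url).hostname or ""
    if not final_host.endswith(domain):
        return "redirect (possibly defensive)"
    body = resp.text.lower()
    if any(hint in body for hint in PARKING_HINTS):
        return "parked/speculative"
    return "possibly legitimate content"

# Hypothetical usage; the paper combines this with whois and zone data.
print(infer_intent("example.xxx"))
```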

Collaboration


Dive into Kirill Levchenko's collaboration.

Top Co-Authors

Stefan Savage (University of California)
Vern Paxson (University of California)
Chris Kanich (University of Illinois at Chicago)
Chris Grier (University of California)