Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kurt Thomas is active.

Publication


Featured research published by Kurt Thomas.


internet measurement conference | 2011

Suspended accounts in retrospect: an analysis of twitter spam

Kurt Thomas; Chris Grier; Dawn Song; Vern Paxson

In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the widespread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that includes Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable storefronts, blurring the line regarding what constitutes spam on social networks.
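The suspension-latency figure above (77% of spam accounts suspended within one day of their first tweet) comes from comparing per-account timelines. A minimal sketch of that computation, using a few hypothetical timestamp records rather than the study's dataset:

```python
from datetime import datetime, timedelta

# Hypothetical (account_id, first_tweet, suspended_at) records; the study
# derived comparable timelines from 1.1 million suspended accounts.
accounts = [
    ("a1", datetime(2011, 3, 1, 9, 0), datetime(2011, 3, 1, 20, 0)),  # 11 hours
    ("a2", datetime(2011, 3, 2, 9, 0), datetime(2011, 3, 5, 9, 0)),   # 3 days
    ("a3", datetime(2011, 3, 3, 9, 0), datetime(2011, 3, 3, 23, 0)),  # 14 hours
]

# Count accounts suspended within one day of their first tweet.
within_one_day = sum(
    1 for _, first, suspended in accounts
    if suspended - first <= timedelta(days=1)
)
fraction = within_one_day / len(accounts)
print(f"{fraction:.0%} suspended within one day")
```

At scale the same per-account difference of timestamps, aggregated once over the full dataset, yields the paper's 77% figure.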


ieee symposium on security and privacy | 2011

Design and Evaluation of a Real-Time URL Spam Filtering Service

Kurt Thomas; Chris Grier; Justin Ma; Vern Paxson; Dawn Song

On the heels of the widespread adoption of web services such as social networks and URL shorteners, scams, phishing, and malware have become regular threats. Despite extensive research, email-based spam filtering techniques generally fall short for protecting other web services. To better address this need, we present Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. We evaluate the viability of Monarch and the fundamental challenges that arise due to the diversity of web service spam. We show that Monarch can provide accurate, real-time protection, but that the underlying characteristics of spam do not generalize across web services. In particular, we find that spam targeting email qualitatively differs in significant ways from spam campaigns targeting Twitter. We explore the distinctions between email and Twitter spam, including the abuse of public web hosting and redirector services. Finally, we demonstrate Monarch's scalability, showing our system could protect a service such as Twitter -- which needs to process 15 million URLs/day -- for a bit under $800/day.


international conference on malicious and unwanted software | 2010

The Koobface botnet and the rise of social malware

Kurt Thomas; David M. Nicol

As millions of users flock to online social networks, sites such as Facebook and Twitter are becoming increasingly attractive targets for spam, phishing, and malware. The Koobface botnet in particular has honed its efforts to exploit social network users, leveraging zombies to generate accounts, befriend victims, and send malware propagation spam. In this paper, we explore Koobface's zombie infrastructure and analyze one month of the botnet's activity within both Facebook and Twitter. Constructing a zombie emulator, we are able to infiltrate the Koobface botnet to discover the identities of fraudulent and compromised social network accounts used to distribute malicious links to over 213,000 social network users, generating over 157,000 clicks. Despite the use of domain blacklisting services by social network operators to filter malicious links, current defenses recognize only 27% of threats and take on average 4 days to respond. During this period, 81% of vulnerable users click on Koobface spam, highlighting the ineffectiveness of blacklists.


privacy enhancing technologies | 2010

unfriendly: multi-party privacy risks in social networks

Kurt Thomas; Chris Grier; David M. Nicol

As the popularity of social networks expands, the information users expose to the public has potentially dangerous implications for individual privacy. While social networks allow users to restrict access to their personal data, there is currently no mechanism to enforce privacy concerns over content uploaded by other users. As group photos and stories are shared by friends and family, personal privacy goes beyond the discretion of what a user uploads about himself and becomes an issue of what every network participant reveals. In this paper, we examine how the lack of joint privacy controls over content can inadvertently reveal sensitive information about a user including preferences, relationships, conversations, and photos. Specifically, we analyze Facebook to identify scenarios where conflicting privacy settings between friends will reveal information that at least one user intended to remain private. By aggregating the information exposed in this manner, we demonstrate how a user's private attributes can be inferred from simply being listed as a friend or mentioned in a story. To mitigate this threat, we show how Facebook's privacy model can be adapted to enforce multi-party privacy. We present a proof of concept application built into Facebook that automatically ensures mutually acceptable privacy restrictions are enforced on group content.


computer and communications security | 2014

Consequences of Connectivity: Characterizing Account Hijacking on Twitter

Kurt Thomas; Frank Li; Chris Grier; Vern Paxson

In this study we expose the serious large-scale threat of criminal account hijacking and the resulting damage incurred by users and web services. We develop a system for detecting large-scale attacks on Twitter that identifies 14 million victims of compromise. We examine these accounts to track how attacks spread within social networks and to determine how criminals ultimately realize a profit from hijacked credentials. We find that compromise is a systemic threat, with victims spanning nascent, casual, and core users. Even brief compromises correlate with 21% of victims never returning to Twitter after the service wrests control of a victim's account from criminals. Infections are dominated by social contagions---phishing and malware campaigns that spread along the social graph. These contagions mirror information diffusion and biological diseases, growing in virulence with the number of neighboring infections. Based on the severity of our findings, we argue that early outbreak detection that stems the spread of compromise in 24 hours can spare 70% of victims.


ieee symposium on security and privacy | 2015

Ad Injection at Scale: Assessing Deceptive Advertisement Modifications

Kurt Thomas; Elie Bursztein; Chris Grier; Grant Ho; Nav Jagpal; Alexandros Kapravelos; Damon McCoy; Antonio Nappa; Vern Paxson; Paul Pearce; Niels Provos; Moheeb Abu Rajab

Today, web injection manifests in many forms, but fundamentally occurs when malicious and unwanted actors tamper directly with browser sessions for their own profit. In this work we illuminate the scope and negative impact of one of these forms, ad injection, in which users have ads imposed on them in addition to, or different from, those that websites originally sent them. We develop a multi-staged pipeline that identifies ad injection in the wild and captures its distribution and revenue chains. We find that ad injection has entrenched itself as a cross-browser monetization platform impacting more than 5% of unique daily IP addresses accessing Google -- tens of millions of users around the globe. Injected ads arrive on a client's machine through multiple vectors: our measurements identify 50,870 Chrome extensions and 34,407 Windows binaries, 38% and 17% of which are explicitly malicious. A small number of software developers support the vast majority of these injectors who in turn syndicate from the larger ad ecosystem. We have contacted the Chrome Web Store and the advertisers targeted by ad injectors to alert each of the deceptive practices involved.


international world wide web conferences | 2016

Remedying Web Hijacking: Notification Effectiveness and Webmaster Comprehension

Frank Li; Grant Ho; Eric Kuan; Yuan Niu; Lucas Ballard; Kurt Thomas; Elie Bursztein; Vern Paxson

As miscreants routinely hijack thousands of vulnerable web servers weekly for cheap hosting and traffic acquisition, security services have turned to notifications both to alert webmasters of ongoing incidents as well as to expedite recovery. In this work we present the first large-scale measurement study on the effectiveness of combinations of browser, search, and direct webmaster notifications at reducing the duration a site remains compromised. Our study captures the life cycle of 760,935 hijacking incidents from July 2014 to June 2015, as identified by Google Safe Browsing and Search Quality. We observe that direct communication with webmasters increases the likelihood of cleanup by over 50% and reduces infection lengths by at least 62%. Absent this open channel for communication, we find browser interstitials---while intended to alert visitors to potentially harmful content---correlate with faster remediation. As part of our study, we also explore whether webmasters exhibit the necessary technical expertise to address hijacking incidents. Based on appeal logs where webmasters alert Google that their site is no longer compromised, we find 80% of operators successfully clean up symptoms on their first appeal. However, a sizeable fraction of site owners do not address the root cause of compromise, with over 12% of sites falling victim to a new attack within 30 days. We distill these findings into a set of recommendations for improving web security and best practices for webmasters.


computer and communications security | 2014

Dialing Back Abuse on Phone Verified Accounts

Kurt Thomas; Dmytro Iatskiv; Elie Bursztein; Tadek Pietraszek; Chris Grier; Damon McCoy

In the past decade the increase of for-profit cybercrime has given rise to an entire underground ecosystem supporting large-scale abuse, a facet of which encompasses the bulk registration of fraudulent accounts. In this paper, we present a 10-month longitudinal study of the underlying technical and financial capabilities of criminals who register phone verified accounts (PVA). To carry out our study, we purchase 4,695 Google PVA as well as acquire a random sample of 300,000 Google PVA through a collaboration with Google. We find that miscreants rampantly abuse free VOIP services to circumvent the intended cost of acquiring phone numbers, in effect undermining phone verification. Combined with short-lived phone numbers from India and Indonesia that we suspect are tied to human verification farms, this confluence of factors correlates with a market-wide price drop of 30--40% for Google PVA until Google penalized verifications from frequently abused carriers. We distill our findings into a set of recommendations for any services performing phone verification as well as highlight open challenges related to PVA abuse moving forward.


computer and communications security | 2017

Data Breaches, Phishing, or Malware?: Understanding the Risks of Stolen Credentials

Kurt Thomas; Frank Li; Ali Zand; Jacob Barrett; Juri Ranieri; Luca Invernizzi; Yarik Markov; Oxana Comanescu; Vijay Eranti; Angelika Moscicki; Daniel Margolis; Vern Paxson; Elie Bursztein

In this paper, we present the first longitudinal measurement study of the underground ecosystem fueling credential theft and assess the risk it poses to millions of users. Over the course of March 2016--March 2017, we identify 788,000 potential victims of off-the-shelf keyloggers; 12.4 million potential victims of phishing kits; and 1.9 billion usernames and passwords exposed via data breaches and traded on blackmarket forums. Using this dataset, we explore to what degree the stolen passwords---which originate from thousands of online services---enable an attacker to obtain a victim's valid email credentials---and thus complete control of their online identity due to transitive trust. Drawing upon Google as a case study, we find 7--25% of exposed passwords match a victim's Google account. For these accounts, we show how hardening authentication mechanisms to include additional risk signals such as a user's historical geolocations and device profiles helps to mitigate the risk of hijacking. Beyond these risk metrics, we delve into the global reach of the miscreants involved in credential theft and the blackhat tools they rely on. We observe a remarkable lack of external pressure on bad actors, with phishing kit playbooks and keylogger capabilities remaining largely unchanged since the mid-2000s.


ieee symposium on security and privacy | 2016

Cloak of Visibility: Detecting When Machines Browse a Different Web

Luca Invernizzi; Kurt Thomas; Alexandros Kapravelos; Oxana Comanescu; Jean Michel Picod; Elie Bursztein
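The core check in the Data Breaches, Phishing, or Malware? study above -- determining whether a breach-exposed password still matches a victim's current account credential -- can be sketched as follows. The salted PBKDF2 comparison and the sample data are illustrative stand-ins, not Google's production mechanism:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with a per-user salt; a stand-in for whatever
    # key-derivation function a real service actually uses.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def make_record(password: str) -> tuple:
    salt = os.urandom(16)
    return (salt, hash_password(password, salt))

# Hypothetical account store: username -> (salt, password hash).
accounts = {
    "alice@example.com": make_record("hunter2"),
    "bob@example.com": make_record("correct horse"),
}

# Hypothetical breach dump of plaintext credentials.
breach = [
    ("alice@example.com", "hunter2"),    # still matches -> at-risk account
    ("bob@example.com", "wrong guess"),  # stale or mismatched password
    ("carol@example.com", "p@ssw0rd"),   # no such account
]

at_risk = []
for user, leaked_password in breach:
    record = accounts.get(user)
    if record is None:
        continue
    salt, stored = record
    # Constant-time comparison avoids leaking match results via timing.
    if hmac.compare_digest(stored, hash_password(leaked_password, salt)):
        at_risk.append(user)

print(at_risk)  # only accounts whose leaked password still matches
```

Run over billions of exposed credentials, this kind of per-user match rate is what yields the study's finding that 7--25% of exposed passwords match a victim's account.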

Collaboration


Dive into Kurt Thomas's collaborations.

Top Co-Authors

Chris Grier (University of California)
Vern Paxson (University of California)
Damon McCoy (George Mason University)
Lucas Ballard (Johns Hopkins University)
Frank Li (University of California)