Emil Sit
Massachusetts Institute of Technology
Publications
Featured research published by Emil Sit.
international workshop on peer-to-peer systems | 2002
Emil Sit; Robert Tappan Morris
Recent peer-to-peer research has focused on providing efficient hash lookup systems that can be used to build more complex systems. These systems have good properties when their algorithms are executed correctly but have not generally considered how to handle misbehaving nodes. This paper looks at what sorts of security problems are inherent in large peer-to-peer systems based on distributed hash lookup systems. We examine the types of problems that such systems might face, drawing examples from existing systems, and propose some design principles for detecting and preventing these problems.
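One design principle in this vein is that node identifiers should be verifiable, for example derived from the node's network address, so a querier can detect peers that claim arbitrary positions in the identifier space. The sketch below is a minimal illustration of that idea, assuming identifiers are the SHA-1 hash of the node's IP address (a common convention in Chord-like DHTs); the function names are illustrative, not taken from the paper.

```python
import hashlib

def expected_node_id(ip_address: str) -> str:
    """Derive the identifier a node at this IP is allowed to claim
    (assumed here: SHA-1 of the IP, as in Chord-like DHTs)."""
    return hashlib.sha1(ip_address.encode()).hexdigest()

def verify_node_id(ip_address: str, claimed_id: str) -> bool:
    """Reject a peer whose claimed identifier does not match its address,
    one way to make identifiers verifiable as the design principles suggest."""
    return expected_node_id(ip_address) == claimed_id

# Example: an honest node versus one trying to choose its own position.
honest_ip = "18.26.4.9"
print(verify_node_id(honest_ip, expected_node_id(honest_ip)))  # True
print(verify_node_id(honest_ip, "0" * 40))                     # False
```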
internet measurement conference | 2004
Jaeyeon Jung; Emil Sit
This paper presents quantitative data about SMTP traffic to MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) based on packet traces taken in December 2000 and February 2004. These traces show that the volume of email increased by 866% between 2000 and 2004. Local mail hosts utilizing black lists generated over 470,000 DNS lookups, which accounted for 14% of all DNS lookups observed on the border gateway of CSAIL on a given day in 2004. In comparison, DNS black list lookups accounted for merely 0.4% of lookups in December 2000. The distribution of the number of connections per remote spam source is Zipf-like in 2004, but not so in 2000. This suggests that black lists may be ineffective at fully stemming the tide of spam. We examined seven popular black lists and found that 80% of spam sources we identified are listed in some DNS black list. Some DNS black lists appear to be well-correlated with others, which should be considered when estimating the likelihood that a host is a spam source.
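The black list lookups counted in these traces follow the standard DNSBL query pattern: reverse the octets of the candidate IP, prepend them to the black list zone, and resolve an A record; an answer means the host is listed, NXDOMAIN means it is not. A minimal sketch of that pattern using only the standard library; the zone name is a placeholder, not one of the seven lists studied.

```python
import socket

def dnsbl_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    """Return True if `ip` appears in the given DNS black list zone.
    A DNSBL query reverses the IPv4 octets and prepends them to the zone;
    an A-record answer means "listed", NXDOMAIN means "not listed"."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any answer => listed
        return True
    except socket.gaierror:           # NXDOMAIN (or other lookup failure) => not listed
        return False

# Example: check a candidate spam source against a (placeholder) black list.
print(dnsbl_listed("192.0.2.1"))
```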
acm special interest group on data communication | 2001
Jaeyeon Jung; Emil Sit; Hari Balakrishnan; Robert Tappan Morris
This paper presents a detailed analysis of traces of DNS and associated TCP traffic collected on the Internet links of the MIT Laboratory for Computer Science and the Korea Advanced Institute of Science and Technology (KAIST). The first part of the analysis details how clients at these institutions interact with the wide-area DNS system, focusing on performance and prevalence of failures. The second part evaluates the effectiveness of DNS caching. In the most recent MIT trace, 23% of lookups receive no answer; these lookups account for more than half of all traced DNS packets since they are retransmitted multiple times. About 13% of all lookups result in an answer that indicates a failure. Many of these failures appear to be caused by missing inverse (IP-to-name) mappings or NS records that point to non-existent or inappropriate hosts. 27% of the queries sent to the root name servers result in such failures. The paper presents trace-driven simulations that explore the effect of varying TTLs and varying degrees of cache sharing on DNS cache hit rates. The results show that reducing the TTLs of address (A) records to as low as a few hundred seconds has little adverse effect on hit rates, and that little benefit is obtained from sharing a forwarding DNS cache among more than 10 or 20 clients. These results suggest that the performance of DNS is not as dependent on aggressive caching as is commonly believed, and that the widespread use of dynamic, low-TTL A-record bindings should not degrade DNS performance.
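The TTL experiments amount to replaying a trace of (timestamp, name) lookups through a cache that honors a fixed TTL and counting hits. The sketch below is a toy version of that kind of trace-driven simulation; the trace contents and TTL values are illustrative, not drawn from the MIT or KAIST data.

```python
def cache_hit_rate(trace, ttl_seconds):
    """Replay a trace of (timestamp, name) lookups through a cache whose
    entries expire after `ttl_seconds`, and return the resulting hit rate."""
    expires_at = {}   # name -> time at which the cached A record expires
    hits = 0
    for ts, name in trace:
        if expires_at.get(name, 0) > ts:
            hits += 1                             # still cached: hit
        else:
            expires_at[name] = ts + ttl_seconds   # miss: fetch and cache
    return hits / len(trace) if trace else 0.0

# Illustrative trace: repeated lookups of one popular name plus a one-off name.
trace = [(0, "www.mit.edu"), (120, "www.mit.edu"), (500, "www.mit.edu"),
         (600, "example.org"), (700, "www.mit.edu")]
for ttl in (60, 300, 3600):
    print(ttl, round(cache_hit_rate(trace, ttl), 2))
```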
international workshop on peer-to-peer systems | 2002
Michael J. Freedman; Emil Sit; Josh Cates; Robert Tappan Morris
We introduce Tarzan, a peer-to-peer anonymous network layer that provides generic IP forwarding. Unlike prior anonymizing layers, Tarzan is flexible, transparent, decentralized, and highly scalable. Tarzan achieves these properties by building anonymous IP tunnels between an open-ended set of peers. Tarzan can provide anonymity to existing applications, such as web browsing and file sharing, without change to those applications. Performance tests show that Tarzan imposes minimal overhead over a corresponding non-anonymous overlay route.
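Tarzan's tunnels layer encryption across a sequence of peer relays, so each relay can strip only its own layer and learn only the next hop. The following is a heavily simplified sketch of that layered (onion-style) wrapping using symmetric keys from the third-party `cryptography` package; it omits Tarzan's per-hop key exchange, cover traffic, and address rewriting, and is not the system's actual packet format.

```python
from cryptography.fernet import Fernet

# One symmetric key per relay on the tunnel path (Tarzan negotiates these
# per hop; here they are simply generated locally for illustration).
path_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(packet: bytes, keys) -> bytes:
    """Encrypt the packet in layers, innermost layer first, so the first
    relay removes the outermost layer and so on down the path."""
    for key in reversed(keys):
        packet = Fernet(key).encrypt(packet)
    return packet

def relay(packet: bytes, key: bytes) -> bytes:
    """What each relay does: strip exactly its own layer of encryption."""
    return Fernet(key).decrypt(packet)

onion = wrap(b"GET / HTTP/1.0\r\n\r\n", path_keys)
for key in path_keys:          # the packet travels hop by hop along the tunnel
    onion = relay(onion, key)
print(onion)                   # the original IP payload emerges at the exit
```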
international workshop on peer-to-peer systems | 2004
Emil Sit; Frank Dabek; James Robertson
UsenetDHT is a system that reduces the storage and bandwidth resources required to run a Usenet server by spreading the burden of data storage across participants. UsenetDHT distributes data using a distributed hash table. The amount of data that must be stored on each node participating in UsenetDHT scales inversely with the number of participating nodes. Each node’s bandwidth requirements are proportional to the fraction of articles read rather than to the total number posted.
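The inverse scaling of per-node storage follows from consistent hashing: each article's message ID is hashed onto the same identifier space as the nodes and stored on the node responsible for that region, so each of N nodes holds roughly a 1/N share. A toy sketch of that placement on a simple hash ring; the real system builds on DHash/Chord, and the node and article names below are made up.

```python
import hashlib
from collections import Counter

def ring_position(key: str) -> int:
    """Map a node name or article message ID onto a circular ID space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

def responsible_node(article_id: str, ring) -> str:
    """Consistent hashing: the article is stored on the first node whose
    ring position is at or past the article's position (wrapping around)."""
    pos = ring_position(article_id)
    for node_pos, node in ring:
        if node_pos >= pos:
            return node
    return ring[0][1]   # wrap around the ring

nodes = [f"server-{i}" for i in range(10)]
ring = sorted((ring_position(n), n) for n in nodes)
articles = [f"<msg-{i}@example.net>" for i in range(10_000)]
load = Counter(responsible_node(a, ring) for a in articles)
print(load)   # adding nodes shrinks each node's share of the articles
```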
networked systems design and implementation | 2004
Frank Dabek; Jinyang Li; Emil Sit; James Robertson; M. Frans Kaashoek; Robert Tappan Morris
networked systems design and implementation | 2006
Byung-Gon Chun; Frank Dabek; Andreas Haeberlen; Emil Sit; Hakim Weatherspoon; M. Frans Kaashoek; John Kubiatowicz; Robert Tappan Morris
usenix security symposium | 2001
Kevin Fu; Emil Sit; Kendra Smith; Nick Feamster
international workshop on peer-to-peer systems | 2006
Emil Sit; Andreas Haeberlen; Frank Dabek; Byung-Gon Chun; Hakim Weatherspoon; Robert Tappan Morris; M. Frans Kaashoek; John Kubiatowicz
international workshop on peer-to-peer systems | 2007
Jeremy Stribling; Emil Sit; M. Frans Kaashoek; Robert Morris