Publications


Featured research published by Steven Bauer.


pervasive computing and communications | 2004

A user-guided cognitive agent for network service selection in pervasive computing environments

Peyman Faratin; Steven Bauer; John Wroclawski

Connectivity is central to pervasive computing environments. We seek to catalyze a world of rich and diverse connectivity through technologies that drastically simplify the task of providing, choosing, and using wireless network services, creating a new and more competitive environment for these capabilities. A critical requirement is that users actually benefit from this rich environment, rather than simply being overloaded with choices. We address this with an intelligent software agent that transparently and continually chooses from among available network services based on its user's individual needs and preferences, while requiring only minimal guidance and interaction. We present an overview and model of the network service selection problem. We then describe an adaptive user agent that learns its user's network service preferences from a very minimal, intuitive set of inputs, and autonomously and continually selects the service that best meets the user's needs. Results from preliminary user experiments demonstrate the effectiveness of our agent.
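
As a rough illustration of the selection loop the abstract describes, the following minimal sketch shows an agent that scores available services against learned preference weights and refines them from simple user feedback. This is not the authors' implementation; the service attributes, feature scaling, and update rule are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    bandwidth_mbps: float
    cost_per_mb: float
    latency_ms: float

class PreferenceAgent:
    """Scores services with learned weights; learns from user feedback."""

    def __init__(self):
        self.w = {"bandwidth": 0.5, "cost": 0.5, "latency": 0.5}

    @staticmethod
    def features(s: Service) -> dict:
        # Higher is better for every feature, so cost and latency
        # enter negated (scaled to roughly comparable magnitudes).
        return {"bandwidth": s.bandwidth_mbps / 100,
                "cost": -s.cost_per_mb * 10,
                "latency": -s.latency_ms / 100}

    def score(self, s: Service) -> float:
        f = self.features(s)
        return sum(self.w[k] * f[k] for k in self.w)

    def select(self, services):
        return max(services, key=self.score)

    def feedback(self, s: Service, satisfied: bool, lr: float = 0.1):
        # Perceptron-style update: move weights toward (or away from)
        # the chosen service's feature profile.
        sign = 1.0 if satisfied else -1.0
        for k, v in self.features(s).items():
            self.w[k] += sign * lr * v

agent = PreferenceAgent()
best = agent.select([Service("cafe-wifi", 20, 0.00, 40),
                     Service("cellular", 50, 0.02, 60)])
agent.feedback(best, satisfied=True)   # minimal, intuitive user input
print(best.name, agent.w)
```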


internet measurement conference | 2011

Measuring the state of ECN readiness in servers, clients, and routers

Steven Bauer; Robert Beverly; Arthur W. Berger

Better exposing congestion can improve traffic management in the wide-area, at peering points, among residential broadband connections, and in the data center. TCP's network utilization and efficiency depend on congestion information, while recent research proposes economic and policy models based on congestion. Such motivations have driven widespread support of Explicit Congestion Notification (ECN) in modern operating systems. We reappraise the Internet's ECN readiness, updating and extending previous measurements. Across large and diverse server populations, we find a three-fold increase in ECN support over prior studies. Using new methods, we characterize ECN within mobile infrastructure and at the client side, populations previously unmeasured. Via large-scale path measurements, we find the ECN feedback loop failing in the core of the network 40% of the time, typically at AS boundaries. Finally, we discover new examples of infrastructure violating ECN Internet standards, and discuss remaining impediments to running ECN while suggesting mechanisms to aid adoption.
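
A hedged sketch of one server-side ECN-readiness probe in the spirit of this line of measurement (not the authors' actual tool): send an ECN-setup SYN (SYN+ECE+CWR, per RFC 3168) and check whether the SYN-ACK carries ECE, indicating the server negotiates ECN. Requires scapy and raw-socket privileges; the target host is an assumption.

```python
from scapy.all import IP, TCP, sr1

def ecn_ready(host: str, port: int = 80) -> bool:
    # "SEC" = SYN + ECE + CWR: an ECN-setup SYN per RFC 3168.
    syn = IP(dst=host) / TCP(dport=port, flags="SEC")
    resp = sr1(syn, timeout=2, verbose=0)
    if resp is None or TCP not in resp:
        return False
    flags = resp[TCP].flags
    # ECN is negotiated if the SYN-ACK sets ECE (0x40) but not CWR (0x80).
    return bool(flags & 0x40) and not bool(flags & 0x80)

print(ecn_ready("example.com"))
```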


adaptive agents and multi-agents systems | 2004

Learning User Preferences for Wireless Services Provisioning

Steven Bauer; Peyman Faratin; John Wroclawski

The problem of interest is how to dynamically allocate wireless access services in a competitive market that implements a take-it-or-leave-it allocation mechanism. In this paper we focus on the subproblem of preference elicitation, given a mechanism. The user, due to a number of cognitive and technical factors, is assumed to be initially uninformed about their preferences in the wireless domain. The solution we have developed is a closed-loop user-agent system that assists the user in application-, task-, and context-dependent service provisioning by adaptively and interactively learning to select the best wireless data service. The agent learns an incrementally revealed user preference model given explicit or implicit feedback on its decisions by the user. We model this closed-loop system as a Markov Decision Process, where the agent's actions are rewarded by the user, and show how a reinforcement learning algorithm can be used to learn a model of the user's preferences on-line in the given allocation mechanism. We evaluate the performance and value of the agent in a series of preliminary empirical user studies.
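
A minimal Q-learning sketch of the closed-loop setting described above (an illustrative reconstruction, not the authors' code): the agent accepts or rejects a take-it-or-leave-it service offer and updates its value estimates from the user's reward signal. The state encoding and reward are assumptions.

```python
import random
from collections import defaultdict

ACTIONS = ("accept", "reject")

class UserAgent:
    def __init__(self, alpha=0.2, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # Q[(state, action)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td

# One interaction: the "state" is a coarse descriptor of the offer,
# and the reward stands in for explicit or implicit user feedback.
agent = UserAgent()
state = ("wifi", "high_bw", "low_price")
action = agent.act(state)
reward = 1.0 if action == "accept" else 0.0  # user approved the choice
agent.update(state, action, reward, state)
```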


Telecommunications Policy Research Conference (TPRC) | 2016

Policy Challenges in Mapping Internet Interdomain Congestion

kc claffy; David D. Clark; Steven Bauer; Amogh Dhamdhere

Interconnection links connecting access providers to their peers, transit providers, and major content providers are a potential point of discriminatory treatment and impairment of user experience. In the U.S., the FCC has asserted regulatory authority over those links, although it has acknowledged that it thus far lacks sufficient expertise to develop appropriate regulations. Without a basis of knowledge that relates measurement to justified inferences about actual impairment, different actors can put forward opportunistic interpretations of data to support their points of view. We introduce a topology-aware model of interconnection to add clarity to a recent proliferation of data and claims, and to elucidate our own beliefs about how to measure interconnection links of access providers and how policymakers should interpret the results. We use six case studies that span data sets offered by access providers, edge providers, and academic researchers, as well as one mandated by the FCC. This last example reflects our recent experience as the Independent Measurement Experts who worked with the FCC and AT&T to establish a measurement methodology for reporting on the state of AT&T's interconnection links. These case studies show how our conceptual model can guide a critical analysis of what is or should be measured and reported, and how to soundly interpret these measurements. We conclude with insights gained in the process of defining the AT&T/DirecTV methodology and in the process of defining and applying our conceptual model.
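
For concreteness, here is an illustrative sketch of one common way evidence of interdomain congestion is inferred; this is an assumption about methodology in this research area generally, not a description of this paper's model: probe across a link over time and flag a sustained rise in the *minimum* RTT during busy hours, which suggests a standing queue rather than transient noise.

```python
def standing_queue(quiet_rtts_ms, busy_rtts_ms, threshold_ms=5.0):
    # Minimum RTT filters out transient queueing noise; a persistently
    # higher busy-hour floor implies packets always wait in a queue.
    return (min(busy_rtts_ms) - min(quiet_rtts_ms)) > threshold_ms

quiet = [12.1, 12.3, 12.0, 12.2]    # e.g., samples taken at 4am
busy  = [19.8, 21.4, 20.2, 22.0]    # e.g., samples taken at 9pm
print(standing_queue(quiet, busy))  # True: ~8 ms elevated floor
```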


acm special interest group on data communication | 2003

Future directions in network architecture: (FDNA-03)

Steven Bauer; Xiaowei Yang

The Future Directions in Network Architecture (FDNA) Workshop, a one-day workshop held in conjunction with ACM SIGCOMM 2003, provided a forum for participants to predict and consider the architectural underpinnings of future networks and the evolving Internet. The workshop was very well attended, attracting over eighty participants. A total of 48 papers were submitted to the workshop. Joint submission to the workshop and the main SIGCOMM 2003 conference was permitted, and roughly half of the papers were dual submissions. Of these submissions, nine full papers and six short talks were selected for presentation. Speakers of the full papers gave 20-25 minute talks, with 5-10 minutes for questions. Short-talk speakers were allotted 10 minutes for their talks, with 5 minutes for questions. The presented full papers are published in the SIGCOMM 2003 consolidated workshop proceedings, available from the ACM Digital Library.


Archive | 2016

Improving the Measurement and Analysis of Gigabit Broadband Networks

Steven Bauer; William Lehr; Merry Mou

Measurements of broadband performance are important for consumers, ISPs, edge providers, and regulators to make informed decisions regarding the choice, design, and regulation of broadband services that are increasingly regarded as essential basic infrastructure. Bauer, Lehr, and Hung (2015) explained how the shift to very high-speed broadband access services poses a challenge for managing end-user performance expectations and for regulatory policy. In this paper, we focus on the measurement challenges, examining existing broadband tests, which were designed in a world of lower-speed services (tens of Mbps), for their suitability and accuracy when access speeds range from 100 Mbps to 1 Gbps. Our analysis highlights the large variability and systematic biases in results depending on which of the many common tests is used. We explain why this variability is observed and offer thoughts on how the measurement infrastructure should be improved in light of the increased availability and use of superfast broadband.
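
A hedged sketch of one reason test design matters at gigabit speeds: a single TCP connection often cannot fill a 1 Gbps path, so many speed tests open several parallel connections. The test URL and connection count below are illustrative assumptions, not the design of any specific test discussed in the paper.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "https://speedtest.example.net/100MB.bin"  # hypothetical

def fetch(url: str) -> int:
    # Download the object, counting bytes received.
    total = 0
    with urllib.request.urlopen(url) as r:
        while chunk := r.read(1 << 16):
            total += len(chunk)
    return total

def measure_mbps(url: str, parallel: int = 4) -> float:
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        byte_counts = list(pool.map(fetch, [url] * parallel))
    elapsed = time.monotonic() - start
    return sum(byte_counts) * 8 / elapsed / 1e6

# Comparing parallel=1 against parallel=4 often exposes the kind of
# single-flow bias that makes legacy tests understate gigabit links.
print(f"{measure_mbps(TEST_URL, parallel=4):.0f} Mbps")
```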


Chapters | 2016

Interconnection in the Internet: peering, interoperability and content delivery

David D. Clark; William Lehr; Steven Bauer

The Internet is a network of networks that realizes its global reach by being able to route data from source nodes on one network to destination nodes that may be across town or on the other side of the globe, and, in many cases, are on networks that are owned and operated by different Internet service providers (ISPs). Along the end-to-end path, the data may need to cross the networks of still other ISPs. Supporting the end-to-end, global connectivity, which is a hallmark of the Internet’s value proposition, requires that the ISPs be interconnected both physically (i.e., there exists an electronic pathway for transporting packets) and via business relationships. These business relationships impact both the flow of traffic and the flow of money across the Internet value chain. Historically, most traffic was exchanged between the largest ISPs on the basis of revenue-neutral peering agreements that routed traffic but not dollars across ISP interconnections. The explosive growth of video traffic, the increased socio-economic importance of the Internet, and the rise of business disputes over who should pay for the increased costs of traffic have raised questions about whether the time has now come for Internet interconnection to be regulated. In this chapter, we focus on the growing challenge posed by the rise in traffic/usage-related costs for Internet interconnection – attributable today to the rise in entertainment video traffic from content delivery networks (CDNs) – and what this may mean for policy-makers.
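
As a toy illustration of how revenue-neutral peering can break down (an assumption about common peering policies, not a detail from the chapter): settlement-free agreements often require exchanged traffic to stay within some ratio, such as 2:1, and CDN-driven video traffic skews that ratio, fueling the disputes discussed above.

```python
def peering_status(sent_tb: float, received_tb: float,
                   max_ratio: float = 2.0) -> str:
    # Hypothetical ratio test: if traffic in one direction exceeds
    # the other by more than max_ratio, settlement-free peering is
    # typically off the table.
    hi, lo = max(sent_tb, received_tb), min(sent_tb, received_tb)
    ratio = hi / lo if lo > 0 else float("inf")
    return "settlement-free" if ratio <= max_ratio else "renegotiate/paid"

print(peering_status(sent_tb=9.0, received_tb=2.0))  # renegotiate/paid
```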


acm special interest group on data communication | 2015

Experience in using MTurk for Network Measurement

Gokay Huz; Steven Bauer; kc claffy; Robert Beverly

Conducting sound measurement studies of the global Internet is inherently difficult. The collected data depends significantly on vantage point(s), sampling strategies, security policies, and measurement populations -- and conclusions drawn from the data can be sensitive to these biases. Crowdsourcing is a promising approach to addressing these challenges, although the epistemological implications have not yet received substantial attention from the research community. We share our findings from leveraging Amazon's Mechanical Turk (MTurk) system for three distinct network measurement tasks. We describe our failed attempt to outsource execution of a security measurement tool to MTurk, our subsequent successful integration of a simple yet meaningful measurement within a HIT, and the successful use of MTurk to quickly provide focused small sample sets that could not be obtained easily via alternate means. Finally, we discuss the implications of our experiences for other crowdsourced measurement research.
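
A sketch of embedding a measurement inside a HIT, in the spirit of the paper's successful approach (the details are assumptions, not the authors' code): the HIT loads an external page that runs the measurement from the worker's vantage point. Requires AWS credentials; the measurement URL is hypothetical.

```python
import boto3

# ExternalQuestion: MTurk renders this URL in an iframe, so the
# measurement executes from each worker's network vantage point.
QUESTION = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://measure.example.org/probe</ExternalURL>
  <FrameHeight>400</FrameHeight>
</ExternalQuestion>"""

mturk = boto3.client("mturk", region_name="us-east-1")
hit = mturk.create_hit(
    Title="Visit a page (network research)",
    Description="Loads a short, passive network measurement.",
    Reward="0.05",                    # dollars, passed as a string
    MaxAssignments=100,               # 100 distinct vantage points
    LifetimeInSeconds=24 * 3600,
    AssignmentDurationInSeconds=300,
    Question=QUESTION,
)
print(hit["HIT"]["HITId"])
```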


conference on steps to reducing unwanted traffic on internet | 2005

The spoofer project: inferring the extent of source address filtering on the internet

Robert Beverly; Steven Bauer
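
The Spoofer methodology, in essence: a client emits probes whose IP source address is forged; if a probe reaches the measurement server, the client's network performs no egress source-address filtering (BCP 38). Below is a hedged scapy sketch of one such probe; the addresses and session format are illustrative, and sending spoofed traffic requires authorization.

```python
from scapy.all import IP, UDP, Raw, send

SERVER = "192.0.2.10"        # hypothetical measurement server
SPOOFED_SRC = "203.0.113.7"  # an address not assigned to this host
SESSION_ID = b"probe-4242"   # lets the server match probe to client

# Emit one spoofed-source probe; requires raw-socket privileges.
send(IP(src=SPOOFED_SRC, dst=SERVER) / UDP(dport=53000) /
     Raw(SESSION_ID), verbose=0)
# The server records which session IDs arrive; spoofed probes that
# never arrive imply the origin network filters forged sources.
```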


Archive | 2007

Complexity of Internet Interconnections: Technology, Incentives and Implications for Policy

Peyman Faratin; David D. Clark; Steven Bauer; William Lehr

Collaboration


Dive into Steven Bauer's collaborations.

Top Co-Authors

William Lehr

Massachusetts Institute of Technology

David D. Clark

Massachusetts Institute of Technology

Peyman Faratin

Massachusetts Institute of Technology

kc claffy

University of California

John Wroclawski

Massachusetts Institute of Technology

Robert Beverly

Naval Postgraduate School

Arthur W. Berger

Massachusetts Institute of Technology

Georgios Smaragdakis

Technical University of Berlin
