Christopher Metz
Cisco Systems, Inc.
Publication
Featured research published by Christopher Metz.
IEEE Internet Computing | 2001
Randall R. Stewart; Christopher Metz
For the past 20 years (1980-2000), applications and end users of the TCP/IP suite have employed one of two transport protocols: the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). Yet some applications already require greater functionality than either TCP or UDP has to offer, and future applications might require even more. To extend transport-layer functionality, the Internet Engineering Task Force approved the Stream Control Transmission Protocol (SCTP) as a proposed standard in October 2000. SCTP was spawned from an effort started in the IETF Signaling Transport (Sigtran) working group to develop a specialized transport protocol for call-control signaling in voice-over-IP (VoIP) networks. Recognizing that other applications could use some of the new protocol's capabilities, the IETF now embraces SCTP as a general-purpose transport-layer protocol, joining TCP and UDP above the IP layer. Like TCP, SCTP offers a point-to-point, connection-oriented, reliable delivery transport service for applications communicating over an IP network.
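One SCTP feature the abstract alludes to is message-oriented delivery over multiple independent streams within one association, so a loss on one stream does not stall the others the way a loss stalls TCP's single byte stream. The following is a toy sketch of that per-stream reassembly idea, not real SCTP code; the class name and message labels are invented for illustration.

```python
# Toy model (not a real SCTP stack): per-stream in-order delivery.
# SCTP sequences messages per stream, so a gap on one stream does not
# block delivery on another -- unlike TCP's single ordered byte stream.

class StreamReassembler:
    """Delivers messages of one SCTP-like stream in sequence order."""
    def __init__(self):
        self.next_ssn = 0   # next stream sequence number expected
        self.buffer = {}    # out-of-order messages, keyed by SSN

    def receive(self, ssn, data):
        """Accept a (possibly out-of-order) message; return what is deliverable."""
        self.buffer[ssn] = data
        delivered = []
        while self.next_ssn in self.buffer:
            delivered.append(self.buffer.pop(self.next_ssn))
            self.next_ssn += 1
        return delivered

# Two independent streams within one association:
streams = {0: StreamReassembler(), 1: StreamReassembler()}
# Stream 0's first message is lost for now; stream 1 is unaffected.
assert streams[0].receive(1, "s0-msg1") == []            # blocked on stream 0 only
assert streams[1].receive(0, "s1-msg0") == ["s1-msg0"]   # delivered immediately
assert streams[0].receive(0, "s0-msg0") == ["s0-msg0", "s0-msg1"]
```

In real SCTP the kernel performs this reassembly per stream identifier; the point of the sketch is only that ordering constraints are scoped to a stream, not to the whole association.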
IEEE Internet Computing | 2000
Rick Boivie; Nancy K. Feldman; Christopher Metz
The Internet's global ubiquity has fostered numerous applications that use many different communications models. Applications like FTP, Web browsing, and e-mail employ a unicast model where two parties exchange data over logical point-to-point connections. In other applications, such as multiparty audio/video conferencing and collaborative gaming, a source sends data to multiple parties. One way to support multiparty communications is with unicast connections between the source and all of the receivers. If a group has N parties, then a source must set up N-1 unicast connections and transmit the data N-1 times over the network. When N is large, scalability becomes an issue for the source and the network. IP multicast solves this problem by sending a single copy of the data over a distribution tree that is rooted at the source and that branches out to the various destinations. Because the source transmits a single copy of the data, only one copy of the data appears on the branches in the distribution tree.
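The unicast-versus-multicast cost argument above can be made concrete with a little arithmetic over a deliberately simple, hypothetical topology: a source reaching all receivers through one shared router, after which the tree fans out one branch per receiver. The function names and topology are assumptions for illustration, not from the article.

```python
# Hypothetical topology: source -> router (shared path), then one
# branch per receiver. Count how many copies cross any link in total.

def unicast_link_transmissions(n_receivers, shared_hops=1):
    # Unicast: every receiver gets its own copy, so each copy crosses
    # the shared source-router path AND its own branch.
    return n_receivers * shared_hops + n_receivers

def multicast_link_transmissions(n_receivers, shared_hops=1):
    # Multicast: a single copy crosses the shared path; the router
    # replicates it onto each branch of the distribution tree.
    return shared_hops + n_receivers

print(unicast_link_transmissions(100))    # 200 link transmissions
print(multicast_link_transmissions(100))  # 101 link transmissions
```

With 100 receivers, unicast puts 100 duplicate copies on the shared link where multicast puts one, which is exactly the scalability issue the abstract describes.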
IEEE Internet Computing | 2003
Christopher Metz
Virtual private networks (VPNs) are discrete network entities configured and operated over a shared network infrastructure. An intranet is a VPN in which all the sites (the customer locations that are part of a VPN) belong to a single organization. An extranet is a VPN with two or more organizations wishing to share (some) information. In the business world, VPNs let corporate locations share information over the Internet. VPN technology is being extended to the home office, providing telecommuters with networking security and performance commensurate with that available at the office. Service providers are looking to leverage their geographic footprints and network routing expertise to create and deliver new revenue-generating VPN services. Looking ahead, these provider-provisioned and managed VPNs are intended to emulate whatever local- or wide-area network connectivity customers desire.
IEEE Internet Computing | 2001
Christopher Metz
The Internet is rife with paradox. For example, new optical switches capable of forwarding terabits of data (in photonic format) must work with a decades-old protocol suite first developed for software-controlled electronic packet switches. Another example is that while IP multicast offers by far the most efficient delivery vehicle for large-scale multiparty communications, few service providers deploy it, choosing instead to consume bandwidth and host resources with multiple point-to-point connections. One of the most interesting Internet-related paradoxes is the relationship between Internet service providers. While competing very publicly for customers using price, value-added services, and performance as leverage, they must privately cooperate among themselves to provide global connectivity. Indeed, without this cooperation each ISP network might devolve into its own separate world with few or none of the global Internet's benefits. Fortunately, this is not the case; in fact, the Internet is a network of networks: a mesh of separately controlled, interconnected networks that form one large, global entity.
IEEE Internet Computing | 2000
Christopher Metz
Protection and restoration together connote an additional layer of reliability, availability, and integrity wherever they are applied. Protection ensures that the desired service will not be permanently disrupted in the event of a component failure. Restoration ensures the desired service will be returned following a component failure. For many years, IP has provided a form of protection and restoration by enabling packets to be dynamically rerouted around link or node failures. Coupled with TCP's reliable transport service, it is easy to see how TCP/IP-based networking has achieved a reputation for robustness. The temporal dimension to this IP rerouting mechanism could, however, limit its usefulness for applications with real-time service-level requirements. It takes an IP network some time (usually tens of seconds) to detect a failure, propagate the information to other routers around the network, and then have each router compute a new path. The paper considers how efforts are under way in the research and vendor communities to develop faster and more robust protection and restoration mechanisms for IP networks.
IEEE Internet Computing | 1999
Christopher Metz
The concept of quality of service-that is, the network capability to provide a nondefault service to a subset of the aggregate traffic-has now entered the IP lexicon. The author surveys the history of this development from the Internet's original passenger-class-only, best-effort protocol suite. He concludes with a review of the current Internet Engineering Task Force efforts in the Differentiated Services (DiffServ) working group.
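In the DiffServ model the abstract refers to, the nondefault service is selected per packet by a six-bit Differentiated Services Code Point (DSCP) carried in the former IPv4 TOS byte. A small sketch of that encoding, using the standard per-hop-behavior code points (EF = 46, AF11 = 10, and so on); the dictionary and function names are my own:

```python
# The DS field occupies the old IPv4 TOS byte: the top six bits carry
# the DSCP, which selects a per-hop behavior; the low two bits are ECN.
DSCP = {"default": 0, "AF11": 10, "AF21": 18, "AF31": 26, "AF41": 34, "EF": 46}

def dscp_to_ds_byte(dscp):
    """Shift a 6-bit DSCP into position within the 8-bit DS/TOS byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2

print(hex(dscp_to_ds_byte(DSCP["EF"])))       # 0xb8 -- expedited forwarding
print(hex(dscp_to_ds_byte(DSCP["default"])))  # 0x0  -- plain best effort
```

Routers classify on this byte and apply the matching queueing or drop treatment, which is how a subset of the aggregate traffic gets a nondefault service.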
IEEE Internet Computing | 1998
Christopher Metz
IP routing continues to receive much attention from the research and vendor communities. Its primary function-forwarding packets between networks-must keep pace with the demands of the exponentially growing end-user population. It must accommodate attachment of gigabit data-link technologies such as ATM, Packet over SONET, Gigabit Ethernet, and dense wavelength-division multiplexing, and fill those links at full capacity. As network providers introduce new services supporting multicast, QoS, voice, and security, IP routing-and more specifically the IP forwarding function-will be called upon to analyze additional packet information at gigabit rates to determine how each packet should be handled. Performing these new functions while maintaining parity with the advances in available bandwidth will present an interesting challenge for the forwarding capabilities of IP routers. Indeed, for the Internet to scale, we must scale all dimensions of the IP routing process.
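The forwarding function the abstract highlights boils down to a longest-prefix-match lookup on each packet's destination address. A minimal sketch of that lookup, with an invented three-entry forwarding table (real routers use trie or TCAM hardware to do this at line rate, not a linear scan):

```python
import ipaddress

# Toy forwarding table; prefixes and next-hop names are made up.
table = [
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),
    (ipaddress.ip_network("10.0.0.0/8"), "if-eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "if-eth1"),
]

def lookup(dst):
    """Return the next hop of the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in table if addr in net]
    return max(matches)[1]  # longest prefix wins

print(lookup("10.1.2.3"))   # if-eth1  (matches /16, /8, and /0)
print(lookup("10.9.9.9"))   # if-eth0
print(lookup("192.0.2.1"))  # default-gw
```

The services the abstract lists (multicast, QoS, security) extend this per-packet decision beyond the destination address to additional header fields, which is exactly why forwarding at gigabit rates gets harder.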
IEEE Internet Computing | 2001
Christopher Metz
Over the past several years, traditional carriers and Internet service providers (ISPs) have invested billions of dollars deploying high-speed, high-capacity IP networks. This expansion is intended to lay the foundation for a network that could accommodate exponential traffic growth and deliver new revenue-generating services. Traffic from advanced services incorporating elements such as on-demand video, packet voice, wireless communications, and peer-to-peer networking is expected to consume whatever capacity providers can offer while leading to increased opportunities for revenue growth. This advanced-services traffic has yet to materialize. An unintended consequence of the buildout, however, is that ISP networks possess a glut of capacity. At the same time, ISPs are under great pressure to reduce operational and infrastructure costs while attempting to make money and attract customers with new services. One way to achieve both goals is to carry all traffic over a single IP or multiprotocol label-switching (MPLS) network.
IEEE Internet Computing | 2000
Christopher Metz
Many emerging broadband network technologies depend on terrestrial physical wiring to support megabit- and gigabit-per-second transmission rates. But the thirst for Internet connectivity and high performance remains unquenched. It cannot, and should not, be constrained by the physical wire. Wireless radio frequency (RF) mechanisms (such as general packet radio service, or GPRS) are becoming more popular for connecting mobile and fixed end users to the Internet, but they include performance and configuration limitations posed by technical, environmental, and geographic factors. An ideal solution would be a broadband wireless technology offering performance comparable to that of terrestrial links while supporting residential, corporate, and ISP network configurations and reaching just about anybody in the world, i.e., satellite communications.
IEEE Internet Computing | 2005
Brian Daugherty; Christopher Metz
Multiprotocol label switching (MPLS) is a tunneling technology used in many service provider networks. The most popular MPLS-enabled application in use today is the MPLS virtual private network. MPLS VPNs were developed to operate over MPLS networks, but they can also run over native IP networks. This offers providers flexibility in network-deployment choices, improved routing-system scalability, and greater reach to customers. The key element is the ability to encapsulate MPLS packets in IP tunnels.
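The encapsulation the abstract describes hinges on the MPLS label stack entry: a 32-bit shim (20-bit label, 3-bit traffic class, a bottom-of-stack flag, and an 8-bit TTL, per the RFC 3032 layout) that sits between the tunnel header and the payload; MPLS-in-IP carries this shim plus payload directly inside an IP packet. A sketch of packing that shim, with invented label and payload values:

```python
import struct

def mpls_shim(label, tc=0, s=1, ttl=64):
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout):
    20-bit label | 3-bit TC | 1-bit bottom-of-stack | 8-bit TTL."""
    if not 0 <= label < 2**20:
        raise ValueError("label is a 20-bit value")
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # network byte order

# MPLS-in-IP simply prepends an IP header (protocol number 137) to
# the labeled packet; here we build just the shim + payload portion.
packet = mpls_shim(label=100, ttl=64) + b"inner-payload"
print(packet[:4].hex())  # 00064140
```

Because the tunnel header is plain IP, any IP network can carry the labeled packet between MPLS-aware edges, which is the flexibility and reach the abstract points to.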