Alfred C. Weaver
University of Virginia
Publications
Featured research published by Alfred C. Weaver.
IEEE Computer | 2008
Alfred C. Weaver; Benjamin B. Morrison
In the context of today's electronic media, social networking has come to mean individuals using the Internet and Web applications to communicate in previously impossible ways. This is largely the result of a culture-wide paradigm shift in the uses and possibilities of the Internet itself. The current Web is a much different entity than the Web of a decade ago. This new focus creates a riper breeding ground for social networking and collaboration. In an abstract sense, social networking is about everyone. The mass adoption of social-networking websites points to an evolution in human social interaction.
IEEE Computer | 2008
Andrew D. Jurik; Alfred C. Weaver
The commoditization of computer hardware and software has enabled a new computing paradigm whereby computers will sense, calculate, and act on our behalf, either with or without human interaction as best fits the circumstances. Further, this will occur in an everyday environment, not just when a person is working at a desk. This paradigm shift was made possible by the inexorable increase in computing capabilities as we moved from mainframes (one computer, many people) to the personal computer (one computer, one person) to ubiquitous computing (many computers, one person). It is not uncommon to find a single person managing a desktop PC, laptop, cell phone, PDA, and portable media player. Today, these devices are discrete and managed individually. But as ubiquitous computing evolves, the computers will become both more numerous and less visible; they will be integrated into everyday life in a way that does not call attention to their presence. In the context of medicine, ubiquitous computing presents an exciting challenge and a phenomenal opportunity. Proactive computing is a form of ubiquitous computing in which computers anticipate the needs of people around them. Wearable computing results from placing computers and sensors on the body to create a body area network (BAN) that can sense, process, and report on some set of the wearer's attributes. Proactive computing and wearable computing working in tandem let computers fade into the woodwork, enriching quality of life and engendering independence.
Computer Networks and ISDN Systems | 1994
Bert J. Dempsey; Jörg Liebeherr; Alfred C. Weaver
Distribution of continuous media traffic such as digital audio and video over packet-switching networks has become increasingly feasible due to a number of technology trends leading to powerful desktop computers and high-speed integrated services networks. Protocols supporting the transmission of continuous media are already available. In these protocols, transmission errors due to packet loss are generally not recovered. Instead, existing protocol designs focus on preventive error control techniques that reduce the impact of losses by adding redundancy, e.g., forward error correction, or by preventing loss of important data, e.g., channel coding. The goal of this study is to show that retransmission of continuous media data is often, contrary to conventional wisdom, a viable option in most packet-switching networks. If timely retransmission can be performed with a high probability of success, a retransmission-based approach to error control is attractive because it imposes little overhead on network resources and can be used in conjunction with preventive error control schemes. Since interactive voice has the most stringent delay and error requirements, the study focuses on retransmission in packet voice protocols. An end-to-end model of packet voice transmission is presented and used to investigate the feasibility of retransmission for a wide range of network scenarios. The analytical findings are compared with extrapolations from delay measurements of packet voice transmission over a campus backbone network.
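The feasibility argument turns on timing: a lost packet is worth retransmitting only if the retransmitted copy can still arrive before its scheduled playout. The sketch below illustrates that condition; the parameter names and delay values are illustrative assumptions, not the paper's analytical model.

# A minimal sketch (not the authors' model) of the timing condition that makes
# retransmission viable for packet voice: a lost packet is recoverable only if
# the retransmitted copy can beat its playout deadline.

def retransmission_feasible(one_way_delay_ms, loss_detection_ms, playout_buffer_ms):
    """Return True if a retransmitted packet can arrive before playout.

    one_way_delay_ms   -- network transit time in one direction
    loss_detection_ms  -- time for the receiver to detect the loss (e.g., a gap
                          in sequence numbers) and request retransmission
    playout_buffer_ms  -- extra delay the receiver adds before playing audio
    """
    # Recovery needs: detect the loss, send a request back to the sender (one
    # one-way delay), then receive the retransmitted packet (another one-way
    # delay). The original transit time is already covered by the playout point,
    # so the additional budget is the playout buffer.
    recovery_time = loss_detection_ms + 2 * one_way_delay_ms
    return recovery_time <= playout_buffer_ms

# Example: a 10 ms campus backbone with a 60 ms playout buffer leaves room for
# one retransmission attempt; a 100 ms wide-area path does not.
print(retransmission_feasible(10, 5, 60))    # True
print(retransmission_feasible(100, 5, 60))   # False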
ACM Special Interest Group on Data Communication | 1990
Robert M. Sanders; Alfred C. Weaver
XTP is a reliable, real-time, lightweight transfer layer protocol being developed by a group of researchers and developers coordinated by Protocol Engines Incorporated (PEI) [1,2,3]. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) [4] and ISO's Transport Protocol (TP) [5] were not designed for the next generation of high-speed, interconnected reliable networks such as FDDI and the gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers, and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow and rate control, inter-networking addressing mechanisms, and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
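Rate control, one of the mechanisms the paper describes, limits how much data a sender may emit per unit time and how much it may send back-to-back. The sketch below is a simple token-bucket pacer in that spirit; the class, parameter names, and values are assumptions for illustration, not XTP's actual implementation.

import time

# A minimal sketch, not XTP's implementation: pace transmissions so that a
# configured byte rate and burst allowance are never exceeded.

class RatePacer:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.credit = burst_bytes          # start with a full burst allowance
        self.last = time.monotonic()

    def wait_to_send(self, nbytes):
        """Block until nbytes may be transmitted without exceeding the rate."""
        while True:
            now = time.monotonic()
            # Accumulate credit at the configured rate, capped at the burst size.
            self.credit = min(self.burst, self.credit + (now - self.last) * self.rate)
            self.last = now
            if self.credit >= nbytes:
                self.credit -= nbytes
                return
            # Sleep just long enough to earn the missing credit.
            time.sleep((nbytes - self.credit) / self.rate)

# Example: pace 4 KB segments at 1 MB/s with an 8 KB burst allowance.
pacer = RatePacer(rate_bytes_per_s=1_000_000, burst_bytes=8192)
for _ in range(4):
    pacer.wait_to_send(4096)
    # ... transmit the segment here ...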
Conference on Information and Knowledge Management | 2010
Joel Coffman; Alfred C. Weaver
With regard to keyword search systems for structured data, research during the past decade has largely focused on performance. Researchers have validated their work using ad hoc experiments that may not reflect real-world workloads. We illustrate the wide deviation in existing evaluations and present an evaluation framework designed to validate the next decade of research in this field. Our comparison of 9 state-of-the-art keyword search systems contradicts the retrieval effectiveness purported by existing evaluations and reinforces the need for standardized evaluation. Our results also suggest that there remains considerable room for improvement in this field. We found that many techniques cannot scale to even moderately-sized datasets that contain roughly a million tuples. Given that existing databases are considerably larger than this threshold, our results motivate the creation of new algorithms and indexing techniques that scale to meet both current and future workloads.
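The standardized evaluation advocated here mirrors IR methodology: judge each system's ranked results against relevance assessments and report effectiveness measures. The sketch below computes two such measures, precision at k and average precision, for one query; the result identifiers and relevance judgments are made up for illustration and are not drawn from the authors' framework.

# Illustrative only: effectiveness measures a benchmark-style evaluation of
# keyword search systems might report.

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked results judged relevant."""
    top = ranked[:k]
    return sum(1 for r in top if r in relevant) / k

def average_precision(ranked, relevant):
    """Average of precision values at each rank where a relevant result appears."""
    hits, total = 0, 0.0
    for i, r in enumerate(ranked, start=1):
        if r in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

ranked_results = ["t12", "t7", "t3", "t45", "t9"]   # a system's ranked answers
relevant_answers = {"t7", "t9", "t21"}              # assessor-judged relevant answers

print(precision_at_k(ranked_results, relevant_answers, 5))  # 0.4
print(average_precision(ranked_results, relevant_answers))  # ~0.30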
Computer-Based Medical Systems | 1995
John W. Sublett; Bert J. Dempsey; Alfred C. Weaver
Presents the design and implementation of a digital image capture and distribution system that supports remote ultrasound examinations and, in particular, real-time diagnosis for these examinations. The system was designed in conjunction with radiologists and staff in the Department of Radiology at the University of Virginia Hospital. Based on readily available microcomputer components, our teleultrasound system handles the acquisition, digitizing, and reliable transmission of still and moving images generated by an ultrasound machine. The digital images have a resolution of 640×480 with an 8-bit color plane, can be captured at rates up to 30 frames/sec, and are compressed and decompressed in real-time using specialized hardware. While scalable to communications networks of any transmission speed, initial deployment is envisioned for 1.5 Mbit/s T-1 leased lines. To achieve real-time still image distribution and to reduce the bandwidth necessary for motion video, the teleultrasound design employs lossy image compression based on the JPEG standard. The effects of JPEG compression on diagnostic quality are being studied in a separate signal detection study with the Department of Radiology at the University of Virginia.
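A back-of-the-envelope calculation using the figures quoted in the abstract shows why lossy compression is unavoidable on a T-1 line: uncompressed 640×480, 8-bit video at 30 frames/sec is roughly fifty times the link capacity.

# Bandwidth arithmetic with the abstract's numbers (illustrative, no real data).

width, height = 640, 480
bits_per_pixel = 8
frames_per_sec = 30
link_bits_per_sec = 1_500_000          # 1.5 Mbit/s T-1 leased line

raw_bits_per_sec = width * height * bits_per_pixel * frames_per_sec
print(f"raw video rate: {raw_bits_per_sec / 1e6:.1f} Mbit/s")                     # ~73.7 Mbit/s
print(f"compression ratio needed: {raw_bits_per_sec / link_bits_per_sec:.0f}:1")  # ~49:1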
IEEE Transactions on Knowledge and Data Engineering | 2014
Joel Coffman; Alfred C. Weaver
Extending the keyword search paradigm to relational data has been an active area of research within the database and IR community during the past decade. Many approaches have been proposed, but despite numerous publications, there remains a severe lack of standardization for the evaluation of proposed search techniques. Lack of standardization has resulted in contradictory results from different evaluations, and the numerous discrepancies muddle what advantages are proffered by different approaches. In this paper, we present the most extensive empirical performance evaluation of relational keyword search techniques to appear in the literature to date. Our results indicate that many existing search techniques do not provide acceptable performance for realistic retrieval tasks. In particular, memory consumption precludes many search techniques from scaling beyond small data sets with tens of thousands of vertices. We also explore the relationship between execution time and factors varied in previous evaluations; our analysis indicates that most of these factors have relatively little impact on performance. In summary, our work confirms previous claims regarding the unacceptable performance of these search techniques and underscores the need for standardization in evaluations, standardization exemplified by the IR community.
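The two quantities at issue in this evaluation are execution time and memory consumption. The harness below shows one way such measurements might be taken for a single query; the search function and database are hypothetical stand-ins, not part of the authors' evaluation framework.

import time
import tracemalloc

# Illustrative measurement harness (an assumption, not the paper's framework):
# report execution time and peak memory for one query against one technique.

def measure(search_fn, query, database):
    """Run search_fn(query, database); return (seconds, peak_bytes, results)."""
    tracemalloc.start()
    start = time.perf_counter()
    results = search_fn(query, database)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak, results

# Hypothetical stand-in for a real relational keyword search technique.
def naive_search(query, database):
    terms = query.lower().split()
    return [row for row in database if all(t in row.lower() for t in terms)]

db = ["Alfred Weaver ultrasound", "keyword search relational", "XTP rate control"]
secs, peak, hits = measure(naive_search, "keyword search", db)
print(f"{secs * 1000:.2f} ms, peak {peak} bytes, {len(hits)} result(s)")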
IEEE Computer | 2006
Alfred C. Weaver
Secure Sockets Layer is a Web-based protocol used for securing data exchanges over the Internet. To understand how SSL does its job, we also must review the two cryptographic techniques on which it relies: symmetric-key and public-key cryptography (PKC). For an intended recipient to decode the ciphertext, the sender and receiver must use the same cryptographic technique, and they must safeguard a secret: a random number (called a key) in the case of symmetric-key cryptography, or the private key of a public/private key pair in the case of public-key cryptography. To transport data, large messages are divided into multiple smaller messages with a maximum size of 16 Kbytes. Each message is optionally compressed, then a message authentication code (a hash derived from the plaintext, the two nonces, and the premaster secret) is appended. The plaintext and appended MAC are now encrypted using the negotiated symmetric-key scheme and the computed session key.
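The record processing just described can be sketched concretely: fragment the application data into pieces of at most 16 Kbytes, append a MAC computed from a shared secret, and hand each record to the negotiated symmetric cipher. The sketch below uses a bare HMAC over the fragment and leaves the encryption step as a placeholder; the key material is made up, and this is not a real SSL/TLS implementation.

import hmac
import hashlib

MAX_RECORD = 16 * 1024          # 16 Kbyte fragment limit from the description above

def build_records(message, mac_key):
    records = []
    for offset in range(0, len(message), MAX_RECORD):
        fragment = message[offset:offset + MAX_RECORD]
        # MAC over the fragment (real SSL also covers a sequence number and
        # record header, and derives the MAC key from the handshake secrets).
        mac = hmac.new(mac_key, fragment, hashlib.sha256).digest()
        plaintext_record = fragment + mac
        # In SSL the record would now be encrypted with the negotiated
        # symmetric-key cipher under the computed session key.
        records.append(plaintext_record)
    return records

recs = build_records(b"x" * 40_000, mac_key=b"demo-mac-key")
print(len(recs), [len(r) for r in recs])   # 3 records, each fragment + 32-byte MAC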
International Conference on Computer Communications and Networks | 1997
Matthew T. Lucas; Dallas E. Wrege; Bert J. Dempsey; Alfred C. Weaver
A background traffic model is fundamental to packet-level network simulation since the background traffic impacts packet drop rates, queueing delays, and end-to-end delay variation, and also determines available network bandwidth. In this paper, we present a statistical characterization of wide-area IP traffic based on 90-minute traces taken from a week-long trace of packets exchanged between a large campus network, a statewide educational network, and a large Internet service provider. The results of this analysis can be used to provide a basis for modelling background load in simulations of wide-area packet-switched networks such as the Internet, contribute to understanding the fractal behavior of wide-area network utilization, and provide a benchmark to evaluate the accuracy of existing traffic models. The key findings of our study include the following: (1) both the aggregate packet stream and its component substreams exhibit significant long-range dependencies, in agreement with other traffic studies; (2) the empirical probability distributions of packet arrivals are log-normally distributed; (3) packet sizes exhibit only short-term correlations; and (4) the packet size distribution and correlation structure are independent of both network utilization and time of day.
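Finding (2), log-normality, can be checked in a few lines: take the logarithm of per-interval packet counts, fit a normal distribution, and compare empirical and fitted quantiles. The sketch below uses synthetic data rather than the paper's traces, so the interval length and parameter values are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic "packet counts per 100 ms interval", drawn log-normally for the demo.
counts = rng.lognormal(mean=5.0, sigma=0.6, size=10_000)

log_counts = np.log(counts)
mu, sigma = log_counts.mean(), log_counts.std()
print(f"fitted log-space mean={mu:.2f}, std={sigma:.2f}")

# Crude goodness check: compare empirical quartiles of the log counts with the
# quartiles of the fitted normal distribution (mu +/- 0.6745 * sigma).
qs = np.array([0.25, 0.5, 0.75])
empirical = np.quantile(log_counts, qs)
fitted = mu + sigma * np.array([-0.6745, 0.0, 0.6745])
print(np.round(empirical, 2), np.round(fitted, 2))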
IEEE Computer | 2000
Alfred C. Weaver; Ron Vetter; Andrew B. Whinston; Kathleen Swigger
The year 2000 is a shakeout time for electronic commerce. The "irrational exuberance" (to use Alan Greenspan's phrase) that fueled the dot-com mania seems to have subsided, and some big-name ventures like Microstrategies and Value America have dropped out of the market as a result. What has taken hold instead is a return to traditional business values, where the ventures reaping the rewards are those that serve a specific, identifiable market, improve the Internet's infrastructure, or show a clear path to recurring revenues. How will this new trend affect the future? Looking ahead 15 years, which curve in Figure 1 do you think best predicts e-commerce sales? Curve A suggests steady growth and eventual market saturation. Curve B predicts rapid acceptance, followed by a decline and eventual recovery, as might occur because of a huge but one-time glitch, for example, exposure of some major Web merchant's database of credit card numbers. Curve C depicts slow but steady acceptance. Curve D foretells a catastrophic failure from which e-commerce never recovers, for example, disclosure of some trapdoor in encryption technology. While most pundits may be betting on curve A, this version of the future is possible only if we technologists apply all our skills to avoid the alternatives. Although forecasting the long-term future of e-commerce is difficult, we can say that at least two major research areas will affect the growth, or nongrowth, of Internet businesses over the next three to five years: wireless technology and security. Wireless technology will let us better track individual needs, but it will also generate new concerns over security. The way we handle these concerns is critical to realizing the curve A scenario.