Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shukri Wakid is active.

Publication


Featured research published by Shukri Wakid.


ACM StandardView | 1997

Trust and traceability in electronic commerce

Dennis D. Steinauer; Shukri Wakid; Stanley D. Rasberry

Electronic commerce (EC) will modify some of the traditional models for the conduct of business. However, it is important that many of the long-standing elements of commerce be replicated in the electronic world. Commerce, electronic or otherwise, requires several elements: trading partners, goods and services, units of exchange (money), transaction infrastructures, and delivery and distribution mechanisms. These elements have been developed over centuries of legal, governmental, technological, and commercial practices and have resulted in a business infrastructure that people understand and trust. We explore two important elements of that infrastructure, trust and traceability, in the context of the evolving EC infrastructure. We look at a number of trust enhancers, i.e., technology or other processes that can help increase the level of confidence that people have in electronic commerce. We also examine the concept of traceability, an important trust enhancer, in detail. Finally, we discuss some specific technologies that can increase the overall level of trust in electronic commerce.

Commercial practice and common law, developed over several thousand years, provide the context into which electronic commerce must fit, if it is to succeed. Early commerce, one step above the simplest trading of two articles between two people, was conducted face-to-face, possibly in front of a witness for the more complex transactions. The advent of reliable mail service in the eighteenth century, the telegraph in the nineteenth century, and the spread of telephones in the twentieth allowed commerce to be conducted on a remote basis. Computer-based commerce via networks such as the Internet is simply one more step in that evolution. Prior to the era of remote transactions, money was basically precious metal. The Pound Sterling was exactly that. Because transporting large amounts of precious metal in the service of remote trading was both labor intensive and hazardous, improvements to the banking system were needed to permit the keeping of accounts and the issuing of letters of credit, drafts, checks, and vouchers of various kinds. Money, over the last century, has become disconnected from any underlying metal, and is essentially based on trust in the stability of the issuing country. In recent times financial accounts have been kept almost exclusively on computer-based systems. Despite pressures for rapid development of electronic methods of conducting traditional business activities, the underlying structures, relationships, conventions, and methods of traditional commerce will remain the dominant way of doing business. Electronic methods must be developed to coexist …


IEEE Computer Society Workshop on Future Trends of Distributed Computing Systems | 1999

Priority scheduling and buffer management for ATM traffic shaping

Todd Lizambri; Fernando Duran; Shukri Wakid

The impact of buffer management and priority scheduling is examined in stressful scenarios when the aggregate incoming traffic is higher than the output link capacity of an Asynchronous Transfer Mode (ATM) traffic shaper. To simultaneously reduce cell loss and extreme delay behavior for two or more classes of service, we show that a dynamic priority scheme is required. We propose a scheduling algorithm where the priority of different service queues is dynamically modified to allow for the provisioning of isochronous services on one of the queues. Buffer management ensures that all service queues are guaranteed a minimum amount of memory, yet available memory can be shared between service queues when necessary. This approach guarantees that no cells are lost under overload conditions until all buffer space is exhausted.
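The abstract describes the mechanism only in prose. The sketch below illustrates just the buffer-management half of the idea under stated assumptions: each service queue owns a guaranteed minimum number of cell slots, overflow is borrowed from a shared pool, and a cell is dropped only when every slot is in use. The class, parameters, and slot accounting are hypothetical, not the paper's implementation.

```python
from collections import deque

class ShaperBuffer:
    """Per-queue guaranteed minimum plus a shared overflow pool (illustrative)."""

    def __init__(self, guaranteed, shared_slots):
        # guaranteed: dict mapping a queue id to its reserved cell slots
        self.queues = {q: deque() for q in guaranteed}
        self.guaranteed = dict(guaranteed)
        self.shared_free = shared_slots          # slots left in the common pool
        self.used = {q: 0 for q in guaranteed}   # cells currently held per queue

    def enqueue(self, q, cell):
        if self.used[q] < self.guaranteed[q]:
            pass                                 # still inside the reserved allotment
        elif self.shared_free > 0:
            self.shared_free -= 1                # borrow a slot from the shared pool
        else:
            return False                         # drop only when all memory is in use
        self.used[q] += 1
        self.queues[q].append(cell)
        return True

    def dequeue(self, q):
        if not self.queues[q]:
            return None
        cell = self.queues[q].popleft()
        if self.used[q] > self.guaranteed[q]:
            self.shared_free += 1                # return the borrowed slot
        self.used[q] -= 1
        return cell
```

The dynamic-priority half of the proposal, where the scheduler raises a queue's priority to meet the timing needs of an isochronous service, would sit on top of such a structure and decide which queue `dequeue` is called on at each cell time.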


IEEE Transactions on Communications | 1994

A comparison of FDDI asynchronous mode and DQDB queue arbitrated mode data transmission for metropolitan area network applications

William E. Burr; Shukri Wakid; Xiaomei Qian; Dhadesugoor R. Vaman

The performance of the FDDI token ring and IEEE 802.6 DQDB protocols are compared using discrete event simulation models. A metropolitan area network (MAN) of 100 km and with 50 stations was modeled. A 100 Mbps channel is used for both networks with a traffic model with large (1 kbyte) low priority packets and smaller (100 byte) high priority packets. The delay and fairness characteristics of both networks are analyzed. The simulations show that FDDI has advantages in fairness and maximum capacity, while DQDB offers lower delay at all except very heavy loads and has a stronger priority mechanism.


Journal of High Speed Networks | 1994

Hardware Measurement Techniques for High-Speed Networks

Alan Mink; Yves A. Fouquet; Shukri Wakid

The utilization of the inherent bandwidth of high-speed networks is often obstructed by the implementation of the protocols. NIST and ARPA developed VLSI measurement components that permit researchers and product developers to better understand computing and communication bottlenecks through accurate measurement techniques. To illustrate and promote the use of such public domain components, this paper uses image transfer as an example application over a commercial UNIX (UNIX is a trademark of X/Open) (BSD 4.3.1 variant) implementation of the TCP/IP protocol over an Ethernet (10 Mbits/s) and a HIPPI (800 Mbits/s) medium. The only modification made to this commercially available kernel was to add the simple probe code necessary to interface to the NIST MultiKron (MultiKron is a registered trademark of NIST) measurement chip. The accurate data obtained for the various aspects of this protocol implementation clearly illustrate the major bottlenecks, provide insight with regard to scaling, and determine upper bounds on performance. This paper does not, however, attempt to draw any conclusions with regard to the abstract TCP/IP protocol itself or the generic services of the UNIX operating system; rather, it provides a system-wide perspective on how a wide range of design options and implementation parameters impact the effective throughput to the application.
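MultiKron is a hardware chip whose probes are triggered by kernel writes to memory-mapped registers, so a faithful example would be kernel C code against hardware few readers have. As a software stand-in, the hypothetical sketch below mimics only the probe-point pattern the paper describes: bracket a protocol-processing region with cheap timestamped event marks and analyze the deltas offline. All names here are invented for illustration.

```python
import time

class SoftwareProbe:
    """Software stand-in for a hardware measurement probe. The real MultiKron
    latches a hardware timestamp when instrumented code writes an event code
    to a memory-mapped register; this sketch uses a software clock instead."""

    def __init__(self):
        self.trace = []                              # (event, timestamp in ns)

    def mark(self, event):
        self.trace.append((event, time.perf_counter_ns()))

    def deltas(self):
        """Elapsed nanoseconds between consecutive probe points."""
        return [(b[0], b[1] - a[1]) for a, b in zip(self.trace, self.trace[1:])]

# Hypothetical usage around a region standing in for protocol processing:
probe = SoftwareProbe()
probe.mark("send_enter")
checksum = sum(bytes(64 * 1024)) & 0xFFFF            # placeholder workload
probe.mark("send_exit")
print(probe.deltas())
```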


International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing | 2000

TCP throughput and buffer management

Todd Lizambri; Fernando Duran; Shukri Wakid

There have been many debates about the feasibility of providing guaranteed quality of service (QoS) when network traffic travels beyond the enterprise domain and into the vast unknown of the Internet. Many mechanisms have been proposed to bring QoS to TCP/IP and the Internet (RSVP, DiffServ, 802.1p). However, until these techniques and the equipment to support them become ubiquitous, most enterprises will rely on local prioritization of traffic to obtain the best performance for mission-critical and time-sensitive applications. This paper explores prioritizing critical TCP/IP traffic using a multi-queue buffer management strategy that becomes biased against random low-priority flows and remains biased while congestion exists in the network. This biasing implies a degree of unfairness but proves to be more advantageous to the overall throughput of the network than strategies that attempt to be fair. Only two classes of service are considered; TCP connections are assigned to these classes and mapped to two underlying queues with round-robin scheduling and shared memory. In addition to improving throughput, cell losses are minimized for the class of service (queue) with the higher priority.
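The biasing policy is described only at a high level. Under stated assumptions, a minimal sketch of the idea might look like the following: two queues share one memory pool, a congestion flag latches on when occupancy crosses a high-water mark, low-priority arrivals are dropped while the flag is set, and the flag clears only after the buffer drains past a low-water mark. The thresholds, hysteresis, and class layout are hypothetical, not the paper's parameters.

```python
from collections import deque

HIGH, LOW = 0, 1  # two classes of service, as in the paper

class BiasedBuffer:
    """Two queues over shared memory; low-priority arrivals are dropped while
    congestion persists (illustrative thresholds, hysteresis assumed)."""

    def __init__(self, capacity, high_water, low_water):
        self.queues = (deque(), deque())
        self.capacity = capacity           # total shared memory, in packets
        self.high_water, self.low_water = high_water, low_water
        self.occupancy = 0
        self.congested = False
        self.turn = HIGH                   # round-robin service pointer

    def enqueue(self, cls, pkt):
        if self.occupancy >= self.high_water:
            self.congested = True          # bias latches on under congestion
        elif self.occupancy <= self.low_water:
            self.congested = False         # and releases once the buffer drains
        if self.occupancy >= self.capacity or (self.congested and cls == LOW):
            return False                   # memory exhausted, or biased drop
        self.queues[cls].append(pkt)
        self.occupancy += 1
        return True

    def dequeue(self):
        for _ in range(2):                 # round-robin over the two queues
            q = self.queues[self.turn]
            self.turn ^= 1
            if q:
                self.occupancy -= 1
                return q.popleft()
        return None
```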


ACM StandardView | 1997

Measurement-based standards for future information technology systems

Shukri Wakid; Shirley M. Radack

Information technology is undergoing rapid and constant change to an extent that cannot be matched by any other technology. For the past two decades, there have been continual, dramatic increases in performance and functionality, accompanied by significantly decreasing prices. Rapid change and innovation have affected almost all areas of human endeavor and have enabled the development of new industries, products, and services.

Information technology functionality provided by desktop computers is becoming embedded in network services, consumer products, household appliances, and automobiles. Embedded computer systems that integrate data collected by sensors are beginning to influence many aspects of daily life, such as climate and security controls for homes and automobiles. … and received in a common form or language, and thereby enables multiple functions to come together on common platforms (Cross Industry Working Team's report on Evolving the NII: A Cross Industry Vision). Increased computational power and bandwidth are leading to new applications combining multiple functions such as electronic commerce, search and retrieval of multimedia information from digital libraries, and the integration of design, ordering, and manufacturing processes. Interfaces between the parts of information technology systems are becoming more open, thus enabling users to interconnect the hardware, software, and communications products of different vendors. In early 1996, there were an estimated 330 million personal computers in use worldwide, more than half of which had access to the Internet. People have so far been willing to upgrade and change their computers as the technology changes; an estimated million and a half computers are given away every month. Information technology systems are widely distributed throughout the world, and tens of millions of people have started to access information through computer networks. People throughout the world are closer than ever before to communications facilities. About half of the world's population is, on average, only two hours away from a telephone. Rapidly deployed wireless technology is likely to provide accelerated growth in online services around the world, and the resulting data traffic is expected to overshadow currently dominant voice traffic.


IEEE Transactions on Communications | 1994

Provision of isochronous service on IEEE 802.6

Xiaomei Qian; Sharad Kumar; Dhadesugoor R. Vaman; Shukri Wakid; David Cypher

The authors concentrate on the provision of isochronous services to users via the existing IEEE 802.6 framework, using Q.931 as the signalling protocol for providing call control functions. They present possible scenarios of isochronous service provisioning via the IEEE 802.6 network. Real-time performance of call setup procedures is simulated and analyzed with different priority schemes for transferring signalling messages. The results show that using prioritized queue arbitrated (QA) bandwidth to send signalling messages greatly reduces call setup delay. Call setup delays are determined using both Poisson-type and bursty-type data traffic models. The positioning of the bandwidth manager, the virtual channel identifier (VCI) server, and the Q.931 signalling termination is shown to influence call setup performance.
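As a rough illustration of why prioritized signalling shortens call setup, the toy slotted model below serves one segment per slot and compares a shared FIFO against strict priority for signalling messages. Everything about it (arrival process, rates, slot model) is an assumption for illustration, not the paper's simulation.

```python
import random
from collections import deque

def mean_signalling_delay(prioritized, load=0.8, sig_fraction=0.05,
                          arrivals=20000, seed=1):
    """Mean queueing delay (in slots) seen by signalling segments when they
    share a FIFO with data versus when they are served with strict priority."""
    rng = random.Random(seed)
    sig, data = deque(), deque()
    delays, t, n = [], 0, 0
    while n < arrivals:
        if rng.random() < load:                    # Bernoulli arrivals per slot
            if rng.random() < sig_fraction:
                sig.append(t)
            else:
                data.append(t)
            n += 1
        if prioritized:
            q = sig if sig else data               # signalling preempts the queue
        else:                                      # global FIFO: earliest head wins
            q = sig if sig and (not data or sig[0] <= data[0]) else data
        if q:
            arrived = q.popleft()
            if q is sig:
                delays.append(t - arrived)         # measure signalling delay only
        t += 1
    return sum(delays) / len(delays)

print(mean_signalling_delay(prioritized=False))
print(mean_signalling_delay(prioritized=True))
```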


Journal of Network and Systems Management | 2001

Thresholds, edited by Lawrence Bernstein: Performance Effects of Voice and Data Convergence

Todd Lizambri; Shukri Wakid; Fernando Duran

The convergence of voice and data services onto a single network introduces new and complex challenges for network designers and system administrators. This work highlights the effect of these two services upon one another. We see that introducing data traffic with Fixed Priority scheduling in the network access device does not affect the voice transmissions, but does require sufficient buffer space in the device so as not to lose data traffic. We also show that increasing the number of voice conversations under a Round Robin scheduling policy will severely impact the nonvoice data by degrading Transmission Control Protocol (TCP) throughput because of cell loss and data retransmission. The scheduling policies and buffer sizes, as well as other parameters not studied in our investigation, may be adjusted to produce the desired balance between voice quality and TCP throughput. Voice traffic, being isochronous in nature, demands a certain degree of quality of service. More specifically, bounds on delay and delay variance must be preserved in order to achieve voice transmission that can be easily interpreted on the receiving end. Data traffic is less susceptible to delay and jitter; however, reliable transfer is a must. Therefore, data traffic is transported using a reliable protocol, such as TCP/Internet Protocol (IP), which provides reliability of delivery but may suffer reduced throughput due to flow control and protocol-specific overhead. Our analysis consists of many ten-second simulation runs over a network that implements IP over Asynchronous Transfer Mode (ATM). The simulation represents a wide area access device that classifies data by protocol and provides buffering of data to prevent loss under stressful conditions. We examine performance aspects under different output link rates (T1,
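The two scheduling policies compared in the study are easy to pin down precisely. The sketch below gives minimal versions of both dequeue disciplines under stated assumptions (queues populated elsewhere; names invented here): Fixed Priority always drains the voice queue first, which is why voice is unaffected but data needs buffer space, while Round Robin splits service evenly, which is why added voice load erodes TCP throughput.

```python
from collections import deque

def fixed_priority_next(voice, data):
    """Serve voice whenever it has a cell; data gets only the leftover slots."""
    if voice:
        return voice.popleft()
    if data:
        return data.popleft()
    return None

def round_robin_next(queues, state):
    """Serve queues in turn, skipping empty ones; 'state' holds the pointer."""
    n = len(queues)
    for i in range(n):
        idx = (state["turn"] + i) % n
        if queues[idx]:
            state["turn"] = (idx + 1) % n
            return queues[idx].popleft()
    return None

voice, data = deque(["v1", "v2"]), deque(["d1", "d2"])
state = {"turn": 0}
print(round_robin_next((voice, data), state))   # v1
print(round_robin_next((voice, data), state))   # d1
```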


IEEE Computer Society Workshop on Future Trends of Distributed Computing Systems | 1997

Nomadic computing and CDMA

David Cypher; Shukri Wakid

This paper shows how reliable data services can be provided in an error-prone wireless environment using CDMA and a link layer retransmission protocol. A CDMA encoder using Walsh codes for 1.85 to 1.99 GHz PCS applications is simulated under various error conditions with varying packet sizes to determine minimum acceptable QoS relative to error templates. The error templates and data rates of 16, 32, and 64 kb/s for mobile terminal units are then used with a link layer retransmission protocol to examine the overhead and the practical feasibility of this protocol suite. Small packet sizes of about 50 to 100 bytes appear promising for commercial implementations. Such implementations are also shown to be able to support upper layer TCP/IP services when TCP timers are disabled.
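The reported advantage of 50 to 100 byte packets can be sanity-checked with a back-of-the-envelope ARQ model. The sketch below assumes independent bit errors, an idealized retransmit-until-success link layer, and an invented 8-byte header; none of these numbers come from the paper, but the shape of the trade-off (a large frame is more likely to be hit by an error and must be resent whole) matches its conclusion.

```python
def arq_goodput_fraction(payload_bytes, header_bytes=8, ber=1e-4):
    """Fraction of raw link bandwidth that delivers payload, for an idealized
    link-layer ARQ over a channel with independent bit errors."""
    frame_bits = (payload_bytes + header_bytes) * 8
    p_ok = (1.0 - ber) ** frame_bits          # frame survives the channel
    payload_share = payload_bytes / (payload_bytes + header_bytes)
    # Expected transmissions per delivered frame is 1 / p_ok (geometric),
    # so goodput is the payload share scaled down by p_ok.
    return payload_share * p_ok

for size in (50, 100, 500, 1000):
    print(f"{size:5d} bytes -> {arq_goodput_fraction(size):.3f}")
```

At a bit error rate of 1e-4, the efficiency of 1000-byte frames is roughly half that of 50-byte frames under these assumptions, consistent with the paper's preference for small packets on error-prone channels.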


International Conference on Network Protocols | 1993

The generic flow control (GFC) protocol: a performance assessment

Yoon Chang; David H. Su; Shukri Wakid; Xiaomei Qian; Dhadesugoor R. Vaman

Two candidate protocols for the generic flow control of the broadband integrated services digital network (B-ISDN) are currently being considered by the Accredited Standards Committee T1 (ASC T1). One proposal advocates the use of a distributed queue dual bus (DQDB)-like mechanism with a three-level priority scheme. The other proposal is based on a slotted ring with control terminal equipment (C-TE) assigning periodic credit authorizations to the transmitting stations. This paper analyzes the performance behavior of the two protocols under traffic scenarios and boundary conditions defined by the International Telegraph and Telephone Consultative Committee (CCITT).
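The credit-based alternative is simple enough to sketch. The toy model below, with invented names and granting rules, shows the essential mechanism of the slotted-ring proposal as the abstract describes it: a controller periodically grants credits and a station may transmit one cell per credit, so access is rate-controlled rather than contention-based. It is not the ASC T1 proposal itself.

```python
from collections import deque

class CreditController:
    """Toy credit-based flow control: the control terminal equipment grants a
    fixed number of credits per cycle; stations spend one credit per cell."""

    def __init__(self, station_ids, credits_per_cycle):
        self.backlog = {s: deque() for s in station_ids}  # queued cells
        self.credits = {s: 0 for s in station_ids}
        self.grant = credits_per_cycle

    def submit(self, station, cell):
        self.backlog[station].append(cell)

    def cycle(self):
        """One grant period: issue fresh credits, then let stations transmit."""
        sent = []
        for s in self.credits:
            self.credits[s] = self.grant        # credits do not accumulate (assumption)
        for s, q in self.backlog.items():
            while q and self.credits[s] > 0:
                sent.append((s, q.popleft()))
                self.credits[s] -= 1
        return sent
```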

Collaboration


Dive into Shukri Wakid's collaborations.

Top Co-Authors

Fernando Duran, National Institute of Standards and Technology
David Cypher, National Institute of Standards and Technology
Xiaomei Qian, Stevens Institute of Technology
Alan Mink, National Institute of Standards and Technology
Wayne McCoy, National Institute of Standards and Technology
Christopher E. Dabrowski, National Institute of Standards and Technology
David H. Su, National Institute of Standards and Technology
Dennis D. Steinauer, National Institute of Standards and Technology