Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nina Bhatti is active.

Publication


Featured research published by Nina Bhatti.


IEEE Transactions on Parallel and Distributed Systems | 2002

Performance guarantees for Web server end-systems: a control-theoretical approach

Tarek F. Abdelzaher; Kang G. Shin; Nina Bhatti

The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and marketing a myriad of services. The World Wide Web provides a uniform and widely-accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the center of a gradually emerging e-service infrastructure with increasing requirements for service quality and reliability guarantees in an unpredictable and highly-dynamic environment. This paper describes performance control of a Web server using classical feedback control theory. We use feedback control theory to achieve overload protection, performance guarantees, and service differentiation in the presence of load unpredictability. We show that feedback control theory offers a promising analytic foundation for providing service differentiation and performance guarantees. We demonstrate how a general Web server may be modeled for purposes of performance control, present the equivalents of sensors and actuators, formulate a simple feedback loop, describe how it can leverage real-time scheduling and feedback-control theories to achieve per-class response-time and throughput guarantees, and evaluate the efficacy of the scheme on an experimental testbed using the most popular Web server, Apache. Experimental results indicate that control-theoretic techniques offer a sound way of achieving desired performance in performance-critical Internet applications. Our QoS (Quality-of-Service) management solutions can be implemented either in middleware that is transparent to the server, or as a library called by server code.
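To make the sensor/actuator framing concrete, the sketch below (not taken from the paper) shows a minimal proportional-integral loop in Python: measured per-class response time acts as the sensor, and the fraction of requests admitted for that class acts as the actuator. The target, gains, and sample measurements are illustrative assumptions.

# Minimal sketch, assuming response time is the sensor and the admitted
# fraction of requests is the actuator; gains and targets are made up.
class ResponseTimeController:
    def __init__(self, target_s, kp=0.1, ki=0.05):
        self.target_s = target_s        # desired response time in seconds
        self.kp, self.ki = kp, ki       # proportional / integral gains (assumed)
        self.integral = 0.0
        self.admit_fraction = 1.0       # actuator: share of requests admitted

    def update(self, measured_s):
        # Run once per sampling period with the latest measured response time.
        error = self.target_s - measured_s   # positive when faster than the target
        self.integral += error
        adjustment = self.kp * error + self.ki * self.integral
        # Clamp the actuator to a sensible range.
        self.admit_fraction = min(1.0, max(0.05, self.admit_fraction + adjustment))
        return self.admit_fraction

# Example: a premium class targets 0.5 s; measurements arrive once per second.
ctrl = ResponseTimeController(target_s=0.5)
for rt in [0.4, 0.7, 0.9, 0.6, 0.5]:
    print(round(ctrl.update(rt), 3))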


IEEE Network | 1999

Web server support for tiered services

Nina Bhatti; Rich Friedrich

The evolving needs of conducting commerce using the Internet require more than just network quality of service mechanisms for differentiated services. Empirical evidence suggests that overloaded servers can have significant impact on user-perceived response times. Furthermore, FIFO scheduling done by servers can eliminate any QoS improvements made by network-differentiated services. Consequently, server QoS is a key component in delivering end-to-end predictable, stable, and tiered services to end users. This article describes our research and results for WebQoS, an architecture for supporting server QoS. We demonstrate that through classification, admission control, and scheduling, we can support distinct performance levels for different classes of users and maintain predictable performance even when the server is subjected to a client request rate that is several times greater than the server's maximum processing rate.
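As a rough illustration of how classification, admission control, and priority scheduling fit together (an assumed sketch, not WebQoS code; the tier names and backlog threshold are invented):

import heapq
from itertools import count

# Minimal sketch: classify requests into tiers, reject lower tiers first when
# the backlog grows, and serve the backlog highest tier first instead of FIFO.
PREMIUM, BASIC, BEST_EFFORT = 0, 1, 2    # lower value = higher priority (assumed tiers)
MAX_BACKLOG = 100                        # assumed admission threshold

queue, seq = [], count()

def classify(request):
    # Placeholder policy, e.g. by customer tier carried in a cookie or source address.
    return request.get("tier", BEST_EFFORT)

def admit(request):
    tier = classify(request)
    # Under overload, only the premium tier is still admitted; others get a fast rejection.
    if len(queue) >= MAX_BACKLOG and tier != PREMIUM:
        return False
    heapq.heappush(queue, (tier, next(seq), request))   # seq keeps FIFO order within a tier
    return True

def next_request():
    return heapq.heappop(queue)[2] if queue else None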


International World Wide Web Conference | 1999

Web content adaptation to improve server overload behavior

Tarek F. Abdelzaher; Nina Bhatti

This paper presents a study of Web content adaptation to improve server overload performance, as well as an implementation of a Web content adaptation software prototype. When the request rate on a Web server increases beyond server capacity, the server becomes overloaded and unresponsive. The TCP listen queue of the server's socket overflows, exhibiting a drop-tail behavior. As a result, clients experience service outages. Since clients typically issue multiple requests over the duration of a session with the server, and since requests are dropped indiscriminately, all clients connecting to the server at overload are likely to experience connection failures, even though there may be enough capacity on the server to deliver all responses properly for a subset of clients. In this paper, we propose to resolve the overload problem by adapting delivered content to load conditions to alleviate overload. The premise is that successful delivery of less resource-intensive content under overload is more desirable to clients than connection rejection or failures. The paper suggests the feasibility of content adaptation from three different viewpoints: (a) potential for automating content adaptation with minimal involvement of the content provider, (b) ability to achieve sufficient savings in resource requirements by adapting present-day Web content while preserving adequate information, and (c) feasibility to apply content adaptation technology on the Web with no modification to existing Web servers, browsers or the HTTP protocol.
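One simple way to realize the idea is sketched below under assumptions of our own (the pre-generated content trees and utilization thresholds are hypothetical, not the prototype's): serve the same URL from a lower-fidelity copy of the site as measured load rises, so requests are degraded rather than dropped.

# Hypothetical content trees: /full carries all images, /minimal is text only.
CONTENT_ROOTS = ["/full", "/degraded", "/minimal"]

def choose_root(utilization):
    # Map measured server utilization (0.0 to 1.0) to a content tree; thresholds are assumed.
    if utilization < 0.7:
        return CONTENT_ROOTS[0]
    if utilization < 0.9:
        return CONTENT_ROOTS[1]
    return CONTENT_ROOTS[2]

def rewrite_url(url, utilization):
    # Prefix the requested path with the chosen tree; servers, browsers, and HTTP stay unmodified.
    return choose_root(utilization) + url

print(rewrite_url("/products/index.html", 0.95))   # -> /minimal/products/index.html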


International Workshop on Quality of Service | 1999

Web server QoS management by adaptive content delivery

Tarek F. Abdelzaher; Nina Bhatti

The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and selling a myriad of emerging services. The World-Wide Web provides a uniform and widely-accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the center of a gradually emerging E-service infrastructure with increasing requirements for service quality, reliability, and security guarantees in an unpredictable and highly dynamic environment. Towards that end, we introduce a Web server QoS provisioning architecture for performance differentiation among classes of clients, performance isolation among independent services, and capacity planning to provide QoS guarantees on request rate and delivered bandwidth. We present a new approach to Web server resource management based on Web content adaptation. This approach subsumes traditional admission control-based techniques and enhances server performance by selectively adapting content in accordance with both load conditions and QoS requirements. Our QoS management solutions can be implemented either in middleware transparent to the server or by direct modification of the server software. We present experimental data to illustrate the practicality of our approach.
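A back-of-the-envelope view of the capacity-planning and isolation aspect (a sketch under assumed numbers, not the paper's model): give each independent service a fixed share of server capacity and admit a class's request rate only if its estimated utilization fits within that share.

CAPACITY = 1.0                                                  # normalized server capacity
SHARES = {"storefront": 0.6, "search": 0.3, "reports": 0.1}     # hypothetical services and shares

def admissible(service, request_rate, cost_per_request):
    # cost_per_request: estimated service time in seconds at the chosen content fidelity.
    utilization = request_rate * cost_per_request
    return utilization <= SHARES[service] * CAPACITY

# 50 req/s at 10 ms each is 0.5 utilization, which fits the storefront's 0.6 share.
print(admissible("storefront", 50, 0.010))   # True
print(admissible("reports", 30, 0.005))      # 0.15 > 0.1, so False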


ACM Transactions on Multimedia Computing, Communications, and Applications | 2005

Understanding performance in Coliseum, an immersive videoconferencing system

Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; W. Bruce Culbertson; Thomas Malzbender

Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility---participants may move around the shared space---and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, and sessions spanning two continents. Coliseum is a complex software system which pushes commodity computing resources to the limit. We set out to measure the different aspects of resource usage (network, CPU, memory, and disk) to uncover the bottlenecks and guide enhancement and control of system performance. Latency is a key component of Quality of Experience for video conferencing. We present how each aspect of the system---cameras, image processing, networking, and display---contributes to total latency. Performance measurement is as complex as the system to which it is applied. We describe several techniques to estimate performance through direct lightweight instrumentation as well as use of realistic end-to-end measures that mimic actual user experience. We describe the various techniques and how they can be used to improve system performance for Coliseum and other network applications. This article summarizes the Coliseum technology and reports on issues related to its performance---its measurement, enhancement, and control.
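The lightweight instrumentation idea can be pictured with a small Python sketch (assumed, not Coliseum's actual instrumentation; the stage names and sleep times are placeholders): wrap each pipeline stage in a timer and accumulate its contribution to per-frame latency.

import time
from collections import defaultdict
from contextlib import contextmanager

stage_totals = defaultdict(float)
frames = 0

@contextmanager
def timed(stage):
    # Lightweight per-stage timer; overhead is a couple of clock reads per stage.
    start = time.perf_counter()
    yield
    stage_totals[stage] += time.perf_counter() - start

def process_frame():
    global frames
    with timed("capture"):
        time.sleep(0.005)       # placeholder for camera capture
    with timed("view_synthesis"):
        time.sleep(0.020)       # placeholder for image processing
    with timed("network"):
        time.sleep(0.010)       # placeholder for encode and send
    frames += 1

for _ in range(10):
    process_frame()

for stage, total in stage_totals.items():
    print(f"{stage}: {1000 * total / frames:.1f} ms/frame")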


IEEE Transactions on Computers | 2003

User-level QoS-adaptive resource management in server end-systems

Tarek F. Abdelzaher; Kang G. Shin; Nina Bhatti

Proliferation of QoS-sensitive client-server Internet applications such as high-quality audio, video-on-demand, e-commerce, and commercial Web hosting has generated an impetus to provide performance guarantees. These applications require a guaranteed minimum amount of resources to operate acceptably to the users, thus calling for QoS-provisioning mechanisms. One good place to locate such mechanisms is in server communication subsystems. Server-side communication subsystems manage an increasing number of connection end-points, thus readily controlling important bottleneck resources. We propose, implement, and evaluate a novel communication server architecture that maximizes the aggregate utility of QoS-sensitive connections for a community of clients even in the case of overload. A contribution of this architecture is that it manages QoS from the user space and is transparent to the application. It does not require modifications to the OS kernel, which improves portability and reduces development cost. Results from an experimental evaluation on a microkernel indicate that it achieves end-system overload protection and traffic prioritization, improves insulation between independent clients, adapts to offered load, and enhances aggregate service utility.
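To illustrate the aggregate-utility objective (an assumed sketch, not the architecture's algorithm; the connection names, utilities, and demands are invented), one simple heuristic admits connections in order of utility per unit of the bottleneck resource until capacity is exhausted:

def allocate(connections, capacity):
    # connections: list of (name, utility, resource_demand) tuples.
    admitted = []
    # Consider the highest utility density first.
    for name, utility, demand in sorted(connections, key=lambda c: c[1] / c[2], reverse=True):
        if demand <= capacity:
            admitted.append(name)
            capacity -= demand
    return admitted

conns = [("video-A", 10, 4), ("audio-B", 6, 1), ("bulk-C", 3, 5)]
print(allocate(conns, capacity=6))   # ['audio-B', 'video-A']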


ACM Multimedia | 2003

Computation and performance issues in Coliseum: an immersive videoconferencing system

Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; John MacCormick; Kei Yuasa; W. Bruce Culbertson; Thomas Malzbender

Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility--participants may move around the shared space--and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, and sessions spanning two continents. This paper summarizes the technology, and reports on issues related to its performance.


International Journal of Imaging Systems and Technology | 2007

Assessing Human Skin Color from Uncalibrated Images

Joanna Marguier; Nina Bhatti; Harlyn Baker; Michael Harville; Sabine Süsstrunk

Images of a scene captured with multiple cameras will have different color values because of variations in color rendering across devices. We present a method to accurately retrieve color information from uncalibrated images taken under uncontrolled lighting conditions with an unknown device and no access to raw data, but with a limited number of reference colors in the scene. The method is used to assess skin tones. A subject is imaged with a calibration target. The target is extracted and its color values are used to compute a color correction transform that is applied to the entire image. We establish that the best mapping is done using a target consisting of skin colored patches representing the whole range of human skin colors. We show that color information extracted from images is well correlated with color data derived from spectral measurements of skin. We also show that skin color can be consistently measured across cameras with different color rendering and resolutions ranging from 0.1 to 4.0 megapixels.
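In the simplest linear form (a sketch under our own assumptions, not the paper's exact transform; the patch values below are invented), the correction can be fit by least squares from the captured and known RGB values of the target patches and then applied to the whole image:

import numpy as np

def fit_correction(captured, reference):
    # captured, reference: (N, 3) arrays of patch RGB values, N >= 3.
    # Least-squares solve for M such that captured @ M approximates reference.
    M, *_ = np.linalg.lstsq(captured, reference, rcond=None)
    return M

def correct(image, M):
    # image: (H, W, 3) array; returns the color-corrected image.
    return np.clip(image.reshape(-1, 3) @ M, 0, 255).reshape(image.shape)

# Hypothetical skin-tone target: captured patch values versus their known reference values.
captured = np.array([[180, 140, 120], [120, 90, 75], [220, 180, 160]], float)
reference = np.array([[190, 150, 125], [130, 95, 80], [230, 190, 165]], float)
M = fit_correction(captured, reference)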


International Conference on Image Processing | 2005

Consistent image-based measurement and classification of skin color

Michael Harville; Harlyn Baker; Nina Bhatti; Sabine Süsstrunk

Little prior image processing work has addressed estimation and classification of skin color in a manner that is independent of camera and illuminant. To this end, we first present new methods for 1) fast, easy-to-use image color correction, with specialization toward skin tones, and 2) fully automated estimation of facial skin color, with robustness to shadows, specularities, and blemishes. Each of these is validated independently against ground truth, and then combined with a classification method that successfully discriminates skin color across a population of people imaged with several different cameras. We also evaluate the effects of image quality and various algorithmic choices on our classification performance. We believe our methods are practical for relatively untrained operators, using inexpensive consumer equipment.
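As a toy illustration of the classification step (assumed, not the paper's classifier; the centroids are invented CIELAB values), a corrected facial skin color can be assigned to the nearest class centroid in a perceptual color space:

import numpy as np

# Hypothetical class centroids in CIELAB (L*, a*, b*).
CENTROIDS = {
    "light":  np.array([72.0, 12.0, 16.0]),
    "medium": np.array([58.0, 15.0, 20.0]),
    "deep":   np.array([42.0, 14.0, 19.0]),
}

def classify_skin(lab):
    # lab: measured facial skin color as a (3,) CIELAB vector, after color correction.
    return min(CENTROIDS, key=lambda name: np.linalg.norm(lab - CENTROIDS[name]))

print(classify_skin(np.array([60.0, 14.0, 18.0])))   # -> 'medium'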


Archive | 2005

Quality of Service – IWQoS 2005

Hermann de Meer; Nina Bhatti

Invited Program:
- COPS: Quality of Service vs. Any Service at All
- Beyond Middleware and QoS - Service-Oriented Architectures - Cult or Culture?
- Would Self-organized or Self-managed Networks Lead to Improved QoS?

Full Papers:
- Overlay Networks with Linear Capacity Constraints
- A High-Throughput Overlay Multicast Infrastructure with Network Coding
- On Topological Design of Service Overlay Networks
- On Transport Layer Adaptation in Heterogeneous Wireless Data Networks
- LT-TCP: End-to-End Framework to Improve TCP Performance over Networks with Lossy Channels
- QoS Guarantees in Multimedia CDMA Wireless Systems with Non-precise Network Parameter Estimates
- Analyzing Object Detection Quality Under Probabilistic Coverage in Sensor Networks
- A Self-tuning Fuzzy Control Approach for End-to-End QoS Guarantees in Web Servers
- Calculation of Speech Quality by Aggregating the Impacts of Individual Frame Losses
- Best-Effort Versus Reservations Revisited
- An Advanced QoS Protocol for Real-Time Content over the Internet
- Designing a Predictable Internet Backbone with Valiant Load-Balancing
- Preserving the Independence of Flows in General Topologies Using Turn-Prohibition
- Supporting Differentiated QoS in MPLS Networks
- Avoiding Transient Loops Through Interface-Specific Forwarding
- Analysis of Stochastic Service Guarantees in Communication Networks: A Server Model
- Preemptive Packet-Mode Scheduling to Improve TCP Performance
- Edge-Based Differentiated Services
- Processor Sharing Flows in the Internet
- A Practical Method for the Efficient Resolution of Congestion in an On-path Reduced-State Signalling Environment
- Case Study in Assessing Subjective QoS of a Mobile Multimedia Web Service in a Real Multi-access Network
- WXCP: Explicit Congestion Control for Wireless Multi-hop Networks
- A Non-homogeneous QBD Approach for the Admission and GoS Control in a Multiservice WCDMA System

Short Papers:
- Quality of Service Authentication, Authorization and Accounting
- Preliminary Results Towards Building a Highly Granular QoS Controller
- Concept of Admission Control in Packet Switching Networks Based on Tentative Accommodation of Incoming Flows
- Improving Uplink QoS of Wifi Hotspots
- Resilient State Management in Large Scale Networks
- Performance Analysis of Wireless Scheduling with ARQ in Fast Fading Channels
- Privacy and Reliability by Dispersive Routing
- Distributed Online LSP Merging Algorithms for MPLS-TE
- Implicit Flow QoS Signaling Using Semantic-Rich Context Tags

The Impact of QoS - Where Industry Meets Academia:
- Using IP as Transport Technology in Third Generation and Beyond Radio Access Networks
- Closing the Gap Between Industry, Academia and Users: Is There a Need for QoS in Wireless Systems?
- Why QoS Will Be Needed in Metro Ethernets
- Research Issues in QoS Provisioning for Personal Networks
- RSVP Standards Today and the Path Towards a Generic Messenger
- QoS in Hybrid Networks - An Operators Perspective
- QoS for Aggregated Flows in VPNs
- Supporting Mission-Critical Applications over Multi-service Networks

Collaboration


Dive into Nina Bhatti's collaboration.

Top Co-Authors

Sabine Süsstrunk (École Polytechnique Fédérale de Lausanne)

Michele Covell (Interval Research Corporation)

Joanna Marguier (École Polytechnique Fédérale de Lausanne)