David B. Ingham
Newcastle University
Publication
Featured research published by David B. Ingham.
international world wide web conferences | 1996
David B. Ingham; Steve J. Caughey; Mark Cameron Little
One of the most serious problems plaguing the World Wide Web today is that of broken hypertext links, which are a major annoyance to browsing users and also a cause of tarnished reputations and possible lost opportunities for information providers. The root of the problem lies in the current Web architecture's lack of support for referential integrity. This paper presents a model for the provision of referential integrity for Web resources which supports resource migration and tolerates site and communication failures. The approach is object-oriented, highly flexible, completely distributed, and does not require any global administration. An attractive feature of our design is the provision of a lightweight mechanism which provides referential integrity, and which may be customised on a per-resource basis to provide increased fault tolerance and performance. Our system follows an evolutionary approach, supporting parallel operation with the existing Web, allowing users to gain the additional benefits of referential integrity while allowing continued access through trusted software components.
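The migration-tolerant linking scheme the abstract describes can be pictured as a level of indirection: links name a stable identifier, and a small registry tracks each resource's current location. The sketch below is illustrative only; the names (`Registry`, `resolve`, `migrate`) are assumptions, not the paper's API.

```python
# Illustrative sketch of referential integrity via indirection: links
# point at a stable identifier, and a registry maps that identifier to
# the resource's current URL, so migration never breaks the link.

class Registry:
    def __init__(self):
        self._locations = {}  # stable id -> current URL

    def register(self, rid, url):
        self._locations[rid] = url

    def migrate(self, rid, new_url):
        # Resource moved: update the mapping instead of breaking every link.
        self._locations[rid] = new_url

    def resolve(self, rid):
        # Dereference a stable link to the resource's current location.
        return self._locations[rid]

reg = Registry()
reg.register("w3o:42", "http://a.example/doc.html")
reg.migrate("w3o:42", "http://b.example/doc.html")
print(reg.resolve("w3o:42"))  # http://b.example/doc.html
```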
IEEE Internet Computing | 2000
David B. Ingham; Santosh K. Shrivastava; Fabio Panzieri
The majority of today's millions of Web sites offer read-only access to relatively small amounts of infrequently changing information. The load on these sites is usually small, and services can often be hosted as background tasks on general-purpose workstations; the quality of service (QoS) presented to users at these sites is generally not a primary concern. Conversely, a much smaller number of sites are very popular; they support heavy loads and must meet user expectations regarding QoS to maintain their popularity. We discuss the issues involved in supporting high-volume, high-reliability Web services. We begin by surveying the diverse technical challenges and constraints posed in this environment, followed by currently used hardware- and network-based approaches to meeting the scalability requirements and software-implemented techniques for addressing fault tolerance. However, these approaches do not scale well, so we discuss possible alternatives.
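One common software technique in the replication-based scaling approaches surveyed here is a front-end dispatcher that spreads requests over replica servers and masks a failed replica. A minimal sketch, with assumed names (`Dispatcher`, replicas as plain callables), not taken from the article:

```python
# Sketch: round-robin dispatch over replicated servers, skipping a
# replica that fails so the client never sees the failure.

class Dispatcher:
    def __init__(self, replicas):
        self._replicas = list(replicas)
        self._next = 0

    def dispatch(self, request):
        # Try each replica at most once, in round-robin order.
        for _ in range(len(self._replicas)):
            replica = self._replicas[self._next]
            self._next = (self._next + 1) % len(self._replicas)
            try:
                return replica(request)
            except ConnectionError:
                continue  # mask the failure, try the next replica
        raise RuntimeError("all replicas unavailable")

def good(req):
    return f"served {req}"

def down(req):
    raise ConnectionError

d = Dispatcher([down, good])
print(d.dispatch("/index.html"))  # the failed replica is masked
```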
international world wide web conferences | 1997
David B. Ingham; Steve J. Caughey; Mark Cameron Little
This paper focuses on the management aspects of Web service provision. We argue that support for manageability has to be considered at the design stage if services are to be capable of delivering high levels of quality of service for their users. Examples of the problems caused by a lack of manageability include maintenance operations that necessitate service downtime, or difficulties in ensuring consistency of information. We categorise management issues into those concerning a site as a whole and those pertaining to individual services. Our approach to site management supports the arbitrary distribution of services to machines, allowing the optimum cost/performance configuration to be selected. Services can be easily migrated between machines, resulting in sites that scale, both in terms of the number of services and the number of users. Service management issues may be generalised as supporting evolution, for example, supporting changes to the functionality, the presentation logic, and the overall look and feel of a service. Our approach, based on the separation of functionality and presentation, allows such changes to be performed on-line and ensures that updates are reflected consistently across the various pages of a service, or across services. This approach also facilitates the development of services that utilise dynamic content for service customisation, such as tailoring a service to match the profile of its users. Furthermore, all management operations are available through Web-based interfaces, making them accessible to a broad range of users, not only specialist system administrators.
international world wide web conferences | 1997
Steve J. Caughey; David B. Ingham; Mark Cameron Little
Caching plays a vital role in the performance of any large-scale distributed system and, as the variety and number of Web applications grows, is becoming an increasingly important research topic within the Web community. Existing caching mechanisms are largely transparent to their users and cater for resources which are primarily read-only, offering little support for customisable or complex caching strategies. In this paper we examine the deficiencies in these mechanisms with regard to applications with requirements for shared access to data where clients may require a variety of consistency guarantees. We present "open" caching within an object-oriented framework, an approach to solving these problems which, instead of offering caching transparency, makes the caching mechanism highly visible, allowing great flexibility in caching choices. Our implementation is built upon the W3Objects infrastructure and allows clients to make caching decisions for individual resources with minimal impact upon other resources which do not support our mechanisms.
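The contrast with transparent caching can be sketched as a cache whose consistency policy is visible and chosen per resource, here as a per-entry freshness bound. This is our illustration of the idea, not the W3Objects API; all names are assumed.

```python
import time

# Sketch of "open" caching: the caching policy is explicit and set per
# resource, rather than hidden behind a one-size-fits-all mechanism.

class OpenCache:
    def __init__(self):
        self._entries = {}  # key -> (value, fetched_at, max_age)

    def put(self, key, value, max_age):
        self._entries[key] = (value, time.time(), max_age)

    def get(self, key, fetch, default_max_age=60.0):
        entry = self._entries.get(key)
        if entry is not None:
            value, fetched_at, max_age = entry
            if time.time() - fetched_at < max_age:
                return value  # still fresh under this resource's policy
        value = fetch()
        # Re-cache under the resource's own policy, or a default if new.
        max_age = entry[2] if entry else default_max_age
        self.put(key, value, max_age)
        return value

cache = OpenCache()
cache.put("timetable", "09:15 bus", max_age=3600)   # slow-changing: cache long
cache.put("stock-price", "12.4", max_age=1)         # volatile: cache briefly
print(cache.get("timetable", fetch=lambda: "refetched"))
```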
international world wide web conferences | 1997
Mark Cameron Little; Santosh K. Shrivastava; Steve J. Caughey; David B. Ingham
The Web frequently suffers from failures which affect the performance and consistency of applications run over it. An important fault-tolerance technique is the use of atomic actions (atomic transactions) for controlling operations on services. Atomic actions guarantee the consistency of applications despite concurrent accesses and failures. Techniques for implementing transactions on distributed objects are well known: in order to become "transaction aware", an object requires facilities for concurrency control, persistence, and the ability to participate in a commit protocol. While it is possible to make server-side applications transactional, browsers typically do not possess such facilities, a situation which is likely to persist for the foreseeable future. Therefore, the browser will not normally be able to take part in transactional applications. The paper presents a design and implementation of a scheme that does permit non-transactional browsers to participate in transactional applications, thereby providing much needed end-to-end transactional guarantees.
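One way to picture the scheme is a server-side intermediary that takes part in a two-phase commit on the browser's behalf, buffering the browser's tentative updates until the outcome is decided. The class and method names below are assumptions for illustration, not the paper's design.

```python
# Sketch: a proxy participant lets a non-transactional client's updates
# be governed by a two-phase commit protocol.

class Participant:
    def prepare(self):  # phase 1: vote on whether the work can commit
        return True

    def commit(self):   # phase 2: make the work durable
        pass

    def abort(self):    # phase 2: discard the work
        pass

class BrowserProxy(Participant):
    """Buffers a browser's tentative updates until the outcome is known."""
    def __init__(self, store):
        self.store = store
        self.pending = {}

    def update(self, key, value):
        self.pending[key] = value  # buffered, not yet visible

    def commit(self):
        self.store.update(self.pending)
        self.pending = {}

    def abort(self):
        self.pending = {}

def run_transaction(participants):
    # Phase 1: ask every participant to prepare.
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()  # phase 2: everyone voted yes
        return True
    for p in participants:
        p.abort()       # someone voted no: roll everything back
    return False

store = {}
proxy = BrowserProxy(store)
proxy.update("basket", ["book"])
run_transaction([proxy])
print(store)  # {'basket': ['book']}
```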
Advances in Computers | 1999
Steve J. Caughey; Daniel Hagimont; David B. Ingham
Internet applications are becoming truly distributed as intelligence moves to the browser and services are being decentralised in order to improve their performance and availability. As a consequence, distributed, object-oriented technology in the form of language-level support, e.g., Java, or middleware platforms, e.g., CORBA, is being increasingly deployed. Underpinning this technology are many years of research, such as that undertaken by the members of the Broadcast Working Group, into the problems of distribution in large-scale systems. In this chapter we outline some of the constraints of large-scale systems in general and the Internet in particular. We then present two case studies that illustrate the application of distributed, object-oriented technology developed within the project. The first of these is the W3Objects project, in which the technology is applied to the Web; in the second it is applied to Computer Supported Collaborative Work Internet applications.
Advances in Computers | 1999
Mark Cameron Little; Stuart M. Wheater; David B. Ingham; C. Richard Snow; Harry Whitfield; Santosh K. Shrivastava
Prior to 1994, student registration at Newcastle University involved students being registered in a single place, where they would present a form which had previously been filled in by the student and their department. After registration this information was then transferred to a computerised format. The University decided that the entire registration process was to be computerised for the Autumn of 1994, with admission and registration carried out in the students' departments. Such a system has a very high availability requirement: admissions tutors and secretaries must be able to access and create student records (particularly at the start of a new academic year when new students arrive). The Arjuna distributed system has been under development in the Department of Computing Science for many years. Arjuna's design aims are to provide tools to assist in the construction of fault-tolerant, highly available distributed applications using atomic actions (atomic transactions) and replication. Arjuna offers the right set of facilities for this application, and its deployment would enable the University to exploit the existing campus network and workstation clusters, thereby obviating the need for any specialised fault-tolerant hardware.
international symposium on distributed objects and applications | 1999
David B. Ingham; Owen Rees; Andy Norman
Electronic commerce on the Internet is evolving from simple customer-to-business interactions, like online shopping, to complex business-to-business extranet applications. These applications typically require back-office processing in two or more organisations. CORBA provides abstractions that make it a good technology for building such applications. Transactions are a well known technique for ensuring the overall consistency of system state in the presence of the concurrent access and occasional failure that are likely in extranet applications. The use of CORBA transactions for supporting extranet applications is complicated by the use of firewalls. Conventional firewall technology operates by restricting access based on host address and port number, and does not suit CORBA, which abstracts away from these concepts. The paper describes the issues involved and shows how they can be addressed using an advanced CORBA object gateway.
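The mismatch the abstract identifies can be sketched as follows: instead of opening host/port pairs through the firewall, a single exposed gateway routes invocations by object key to internal servers. This is our simplified illustration of an object gateway, not the paper's design; all names are assumed.

```python
# Sketch: the firewall exposes only the gateway; invocations name
# objects, not internal hosts and ports.

class ObjectGateway:
    def __init__(self):
        self._routes = {}  # object key -> internal target object

    def export(self, key, target):
        # Only explicitly exported objects are reachable from outside.
        self._routes[key] = target

    def invoke(self, key, method, *args):
        target = self._routes.get(key)
        if target is None:
            raise KeyError(f"object {key!r} is not exported")
        return getattr(target, method)(*args)

class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

gw = ObjectGateway()
gw.export("acct/7", Account(100))
print(gw.invoke("acct/7", "deposit", 25))  # 125
```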
Proceedings 1999 IEEE Workshop on Internet Applications (Cat. No.PR00197) | 1999
David B. Ingham; Steve J. Caughey; Paul Watson; Stephen Halsey
In traditional commerce, brokers act as middlemen between customers and providers, aggregating, repackaging and adding value to products, services or information. In today's World Wide Web, such services are generally lacking, with the result that individuals are forced to manually discover, collate and analyse information to meet their needs. This paper begins by presenting the design and implementation of a travel-planning brokering system which provides a combined travel timetable service using information gleaned from existing Web services. The aim of this prototype was to gain experience of the needs of Internet brokering systems in general. The lessons learned from the exercise have led to the design of a generic brokering framework, known as Metabroker. The framework provides commonly required functionality and support for popular communication protocols and data formats. Specialist brokers are then created by populating the base framework with the necessary business logic, in the form of workflows, to support the area of speciality of the broker. Our design integrates distributed object, metadata, workflow and object database technologies.
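The framework-plus-workflow split can be pictured as a generic broker that runs an ordered list of business-logic steps over a request. A toy sketch under assumed names (`Broker`, the step functions), not Metabroker's actual design:

```python
# Sketch: the generic framework handles dispatch; a specialist broker is
# just the framework populated with a workflow of business-logic steps.

class Broker:
    def __init__(self, workflow):
        self.workflow = workflow  # ordered list of step functions

    def handle(self, request):
        data = request
        for step in self.workflow:
            data = step(data)  # each step transforms or aggregates the data
        return data

# A toy travel-planning workflow: gather timetable entries from two
# "services", then sort the combined list by departure time.
def fetch_timetables(_request):
    return [("10:30", "rail"), ("09:15", "bus")]

def sort_by_departure(journeys):
    return sorted(journeys)

travel_broker = Broker([fetch_timetables, sort_by_departure])
print(travel_broker.handle("newcastle->london"))
```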
Advances in Computers | 1999
David B. Ingham; Fabio Panzieri; Santosh K. Shrivastava