Publication


Featured research published by Daniel Alexander Ford.


International Journal of Health Geographics | 2006

An extensible spatial and temporal epidemiological modelling system

Daniel Alexander Ford; James H. Kaufman; Iris Eiron

Background: This paper describes the Spatiotemporal Epidemiological Modeller (STEM), an extensible software system and framework for modelling the spatial and temporal progression of multiple diseases affecting multiple populations in geographically distributed locations. STEM is an experiment in developing a software system that can model complex epidemiological scenarios while also being extensible by the research community. The ultimate goal of STEM is to provide a common modelling platform powerful enough for all modelling scenarios and extensible in a way that allows different researchers to combine their efforts in developing exceptionally good models.

Results: STEM is a powerful modelling system that allows researchers to model scenarios with unmixed, non-uniformly distributed populations, in which multiple populations are infected with multiple diseases. Its underlying representational framework, a graph, and its software architecture allow the system to be extended with software components developed by different researchers.

Conclusion: The approach taken in the design of STEM creates a powerful platform for collaborative epidemiological research. Future versions of the system will make such collaborative efforts easy and common.
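
The graph-based framing in this abstract is concrete enough to sketch. The toy below is a minimal illustration only: STEM itself is a Java/Eclipse framework, and none of these names come from its API. It shows the core idea of locations as graph nodes carrying population state, edges carrying mixing between locations, and an interchangeable disease model stepped over the whole graph.

```python
class Location:
    def __init__(self, name, s, i, r):
        self.name = name
        self.s, self.i, self.r = s, i, r     # susceptible / infectious / recovered
        self.edges = []                      # list of (neighbor, mixing_rate)

class SIRModel:
    """One interchangeable disease model; others could plug in the same way."""
    def __init__(self, beta, gamma):
        self.beta, self.gamma = beta, gamma

    def deltas(self, node, dt):
        n = node.s + node.i + node.r
        # Infection pressure: local cases plus weighted cases at neighbors.
        pressure = node.i + sum(rate * nb.i for nb, rate in node.edges)
        new_inf = self.beta * node.s * pressure / n * dt
        new_rec = self.gamma * node.i * dt
        return new_inf, new_rec

def simulate(graph, model, steps, dt=1.0):
    for _ in range(steps):
        changes = [model.deltas(node, dt) for node in graph]  # synchronous step
        for node, (inf, rec) in zip(graph, changes):
            node.s -= inf
            node.i += inf - rec
            node.r += rec

a = Location("A", s=9990, i=10, r=0)
b = Location("B", s=10000, i=0, r=0)
a.edges.append((b, 0.01))
b.edges.append((a, 0.01))
simulate([a, b], SIRModel(beta=0.3, gamma=0.1), steps=100)
print(f"infectious in B after 100 steps: {b.i:.1f}")
```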


International World Wide Web Conference | 1998

Efficient profile matching for large scale Webcasting

Qi Lu; Matthias Eichstaedt; Daniel Alexander Ford

This paper presents an efficient method of matching diverse data against user profiles in large-scale Webcasting systems. Its design and implementation are described in the context of the Grand Central Station (GCS) project at IBM Almaden Research Center. Initial performance evaluation indicates the ability of GCS profile matching to scale up and achieve strong performance via dynamic adaptation.
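
The abstract does not publish GCS's data structures, so the sketch below shows only a standard technique for making profile matching scale, offered as an illustration rather than as the paper's actual design: invert the profiles by term, so each incoming item touches only the profiles that share at least one of its terms.

```python
from collections import defaultdict

class ProfileMatcher:
    """Inverted index over profile terms: a common scaling technique,
    illustrative only and not necessarily GCS's implementation."""

    def __init__(self):
        self.postings = defaultdict(set)   # term -> ids of profiles using it
        self.terms = {}                    # profile id -> its full term set

    def add_profile(self, pid, terms):
        self.terms[pid] = set(terms)
        for t in terms:
            self.postings[t].add(pid)

    def match(self, item_terms):
        item_terms = set(item_terms)
        candidates = set()
        for t in item_terms:               # only profiles sharing a term
            candidates |= self.postings.get(t, set())
        # A candidate matches when all of its terms appear in the item.
        return sorted(p for p in candidates if self.terms[p] <= item_terms)

m = ProfileMatcher()
m.add_profile("alice", ["storage", "raid"])
m.add_profile("bob", ["webcasting"])
print(m.match(["storage", "raid", "jukebox"]))   # ['alice']
```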


Hawaii International Conference on System Sciences | 1999

Collaborative Web crawling: information gathering/processing over Internet

Shang-Hua Teng; Qi Lu; Matthias Eichstaedt; Daniel Alexander Ford; Tobin J. Lehman

The main objective of the IBM Grand Central Station (GCS) project is to gather all types of information in any format (text, data, image, graphics, audio, video) from cyberspace, to process/index/summarize the information, and to push the right information to the right people. Because of the very large scale of cyberspace, parallel processing in both crawling/gathering and information processing is indispensable. We present a scalable method for collaborative Web crawling and information processing. The method includes an automatic cyberspace partitioner which is designed to balance and re-balance the load dynamically among processors. It can be used when all Web crawlers are located on a tightly coupled high-performance system as well as when they are scattered in a distributed environment. We implemented these algorithms in Java.
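
The paper's partitioner balances load dynamically; the sketch below shows only the simplest version of the underlying idea, with every name invented for illustration: hash each URL's host into a fixed number of buckets, assign buckets to crawlers, and rebalance by moving whole buckets rather than individual URLs.

```python
import hashlib
from urllib.parse import urlparse

NUM_BUCKETS = 64   # many more buckets than crawlers keeps rebalancing granular

def bucket(url):
    host = urlparse(url).netloc                       # same host -> same crawler,
    digest = hashlib.md5(host.encode()).hexdigest()   # keeping politeness local
    return int(digest, 16) % NUM_BUCKETS

class Partitioner:
    def __init__(self, crawlers):
        # Start with buckets spread round-robin over the crawlers.
        self.owner = {b: crawlers[b % len(crawlers)] for b in range(NUM_BUCKETS)}

    def crawler_for(self, url):
        return self.owner[bucket(url)]

    def rebalance(self, src, dst, n):
        """Shift n buckets from an overloaded crawler to an idle one."""
        for b, c in list(self.owner.items()):
            if n == 0:
                break
            if c == src:
                self.owner[b] = dst
                n -= 1

p = Partitioner(["crawler-0", "crawler-1", "crawler-2"])
print(p.crawler_for("http://www.almaden.ibm.com/cs/"))
```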


PLOS ONE | 2009

The Cost of Simplifying Air Travel When Modeling Disease Spread

Justin Lessler; James H. Kaufman; Daniel Alexander Ford; Judith V. Douglas

Background: Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease.

Methodology/Principal Findings: Using U.S. ticket data from 2007, we compared a simplified “pipe” model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a “gravity” model, in which the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model.

Conclusions/Significance: If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in the specific effects of interventions on particular air routes, or in the time for a disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems, a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
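
The pipe model is defined precisely enough in the abstract to work through: a traveler departing airport i is assigned a destination j with probability proportional to j's share of total arrivals, with no route structure at all. The passenger counts below are invented for illustration; only the mechanism comes from the abstract.

```python
# Made-up daily passenger counts for three airports.
departures = {"JFK": 1000, "LAX": 900, "XNA": 50}
arrivals   = {"JFK": 950,  "LAX": 950, "XNA": 50}

total_arrivals = sum(arrivals.values())

def pipe_flow(src, dst):
    """Expected daily passengers src -> dst under the pipe approximation:
    departures from src, spread over destinations by their arrival share."""
    if src == dst:
        return 0.0
    return departures[src] * arrivals[dst] / (total_arrivals - arrivals[src])

# Note the failure modes the paper measures: the model routes some JFK traffic
# to a small airport like XNA whether or not a route exists, and it dilutes
# the heavy coast-to-coast JFK<->LAX flow.
for dst in ("LAX", "XNA"):
    print(f"JFK->{dst}: {pipe_flow('JFK', dst):.0f} passengers/day")
```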


International Conference on Data Engineering | 1996

A log-structured organization for tertiary storage

Daniel Alexander Ford; Jussi Myllymaki

We present the design of a log-structured tertiary storage (LTS) system. The advantage of this approach is that it allows the system to hide the details of jukebox robotics and media characteristics behind a uniform, random-access, block-oriented interface. It also allows the system to avoid media mount operations for writes, giving a write performance similar to that of secondary storage.
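
The abstract's two claims, a uniform block interface and writes that never force a media mount, both follow from the log structure. The toy below illustrates that mechanism under stated assumptions; it is not the LTS implementation.

```python
class LogStructuredTertiaryStore:
    """Toy log-structured store: writes append to the currently mounted
    platter, and an index maps logical blocks to (medium, offset), so the
    jukebox presents one uniform random-access block device."""

    def __init__(self, blocks_per_medium):
        self.blocks_per_medium = blocks_per_medium
        self.current = 0
        self.offset = 0
        self.index = {}           # logical block id -> (medium, offset)
        self.media = {0: {}}      # medium id -> {offset: data}

    def write(self, block_id, data):
        if self.offset == self.blocks_per_medium:
            self.current += 1              # current platter full: open a fresh
            self.media[self.current] = {}  # one; still no mount needed to write
            self.offset = 0
        self.media[self.current][self.offset] = data
        self.index[block_id] = (self.current, self.offset)  # overwrite = relocate
        self.offset += 1

    def read(self, block_id):
        medium, offset = self.index[block_id]
        # Only here would a real system issue a robotic mount, if 'medium'
        # is currently shelved.
        return self.media[medium][offset]

store = LogStructuredTertiaryStore(blocks_per_medium=2)
store.write("a", b"v1"); store.write("a", b"v2"); store.write("b", b"x")
print(store.read("a"))    # b'v2'; the stale copy waits for log cleaning
```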


Electronic Commerce Research | 2005

The Social Contract Core

James H. Kaufman; Stefan Edlund; Daniel Alexander Ford; Calvin Powers

The information age has brought with it the promise of unprecedented economic growth based on the efficiencies made possible by new technology. This same greater efficiency has left society with less and less time to adapt to technological progress. Perhaps the greatest cost of this progress is the threat to privacy we all face from unconstrained exchange of our personal information. In response to this threat, the World Wide Web Consortium has introduced the “Platform for Privacy Preferences” (P3P) to allow sites to express policies in machine-readable form and to expose these policies to site visitors [Cranor et al., 8]. However, today P3P does not protect the privacy of individuals, nor does its implementation empower communities or groups to negotiate and establish standards of behavior. It is only through such negotiation or feedback that new social contracts can evolve. We propose a privacy architecture, the Social Contract Core (SCC), designed to use technology to facilitate this feedback and so speed the establishment of new “Social Contracts” needed to protect private data. The goal of SCC is to empower communities, speed the “socialization” of new technology, and encourage the rapid access to, and exchange of, information. Addressing these issues is essential, we feel, to both liberty and economic prosperity in the information age [Kaufman et al., 17].


Adaptive Agents and Multi-Agent Systems | 2001

Tempus fugit and the need for an e-social contract

James H. Kaufman; Joann Ruvolo; Daniel Alexander Ford

For autonomous agents to achieve their full potential, they require access to detailed private information. The time is rapidly approaching when we can build systems to gather this information and monitor all aspects of an individual's life. In this paper we describe Tempus Fugit (“Time Flies”), an attempt to create just such a system. The reality of this technology has enormous social implications and, misused, it creates direct threats to liberty. We further describe an “e-Social Contract”, a design philosophy developed to safeguard against these threats. It is the foundation of the design philosophy behind Tempus Fugit and should be considered in the development of any agent technology.


Archive | 2011

Modeling in Space and Time

Daniel Alexander Ford; James H. Kaufman; Yossi Mesika

This chapter describes the Spatiotemporal Epidemiological Modeler (STEM), now being developed as an open source software system for defining and visualizing simulations of the spread of infectious disease in space and time. Part of the Eclipse Technology Project (http://www.eclipse.org/stem), STEM is designed to offer the research community the power and extensibility to develop, validate, and share models on a common collaborative platform. Its innovations include a common representational framework that supports users in creating and configuring the components that constitute a model. The chapter defines modeling terms (canonical graph, decorators, etc.) and discusses key concepts (e.g., labels, disease model computations). Figures illustrate the types of visualizations STEM provides, including geographical views via GIS and Google Earth™ and graphics generated for reports.
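
To make the chapter's vocabulary concrete, here is a hedged illustration in plain Python rather than STEM's actual Java API: nodes of the canonical graph carry labels (typed state), and decorators are composable components that each update only the labels they own on every simulation cycle.

```python
class Node:
    def __init__(self, labels):
        self.labels = labels          # typed state, e.g. population, infections

class BirthsDecorator:
    """Updates only the population label."""
    def update(self, node, dt):
        node.labels["population"] *= 1 + 1e-4 * dt

class DiseaseDecorator:
    """Placeholder dynamics; a real disease model computes transitions here."""
    def update(self, node, dt):
        node.labels["infectious"] *= 1 + 0.02 * dt

def run(graph, decorators, steps, dt=1.0):
    for _ in range(steps):
        for decorator in decorators:   # decorators compose freely because
            for node in graph:         # each touches only its own labels
                decorator.update(node, dt)

graph = [Node({"population": 1_000_000, "infectious": 10})]
run(graph, [BirthsDecorator(), DiseaseDecorator()], steps=365)
print(round(graph[0].labels["infectious"]))
```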


Parallel Computing | 1998

Redundant arrays of independent libraries (RAIL): the StarFish tertiary storage system

Daniel Alexander Ford; Robert J. T. Morris; Alan E. Bell

Increased computer networking has sparked a resurgence of the ‘on-line’ revolution of the 1970s, making ever larger amounts of data available on a worldwide basis and placing greater demands on the performance and availability of tertiary storage systems. In this paper, we argue for a new approach to tertiary storage system architecture, obtained by coupling multiple small and inexpensive ‘building block’ libraries (or jukeboxes) to create larger tertiary storage systems. We call the resulting system a RAIL and show that it has performance and availability characteristics superior to conventional tertiary storage systems, at almost the same dollar-per-megabyte cost. A RAIL system is the tertiary storage equivalent of a fixed magnetic disk RAID storage system, but with several additional features that enable the ideas of data striping and redundancy to function efficiently on dismountable media and robotic media mounting systems. We present the architecture of such a system, called StarFish I, and describe the implementation of a prototype. We also introduce the idea of creating a log-structured library array (LSLA) on top of a RAIL architecture (StarFish II) and show how it can achieve write performance equivalent to that of secondary storage and improved read performance, along with other advantages such as easier compression and the elimination of the 4× RAID/RAIL write penalty.
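
The striping, redundancy, and write-penalty arithmetic in this abstract can be made concrete with generic RAID-5-style code; the sketch below illustrates the technique in general and is not StarFish's implementation.

```python
from functools import reduce

def xor_parity(blocks):
    """XOR equal-length blocks byte-wise: the parity block of a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# A stripe of three data blocks (one per library) plus parity on a fourth
# library survives the loss of any single library's platter.
data = [b"\x01\x02", b"\x0f\x00", b"\xff\x10"]
parity = xor_parity(data)

# Reconstruct library 1's block after it fails:
assert xor_parity([data[0], data[2], parity]) == data[1]

# The 4x small-write penalty the LSLA eliminates: updating one block in place
# costs four I/Os, because new_parity = old_parity XOR old_data XOR new_data
# requires reading old data and old parity, then writing both new versions.
# On dismountable media each I/O can mean a robotic mount, so logging full
# stripes sequentially instead (StarFish II) avoids the penalty entirely.
new_data = b"\xaa\xbb"
new_parity = xor_parity([parity, data[1], new_data])
```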


Archive | 2004

Extending Groupware for OLAP

Stefan Edlund; Daniel Alexander Ford; Vikas Krishna; Sunitha Kambhampati

While applications built on top of groupware systems are capable of managing mundane tasks such as scheduling and email, they are not optimised for certain kinds of applications, for instance generating aggregated summaries of scheduled activities. Groupware systems are primarily designed with online transaction processing in mind, and are highly focused on maximizing throughput when clients concurrently access and manipulate information on a shared store. In this paper, we give an overview and discuss some of the implementation details of a system that transforms groupware Calendaring & Scheduling (C&S) data into a relational OLAP database optimised for these kinds of analytical applications. We also describe the structure of the XML documents that carry incremental update information between the source groupware system and the relational database, and show how the generic structure of the documents enables us to extend the infrastructure to other groupware systems as well.
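
The abstract does not reproduce the XML schema, so every element and column name below is invented; the sketch only shows the shape of the pipeline it describes: parse an incremental-update document from the groupware source and emit relational statements for the OLAP store.

```python
import xml.etree.ElementTree as ET

UPDATE_DOC = """
<updates source="calendar">
  <event op="insert" id="42">
    <owner>edlund</owner><start>2004-03-01T09:00</start><hours>1.5</hours>
  </event>
  <event op="delete" id="17"/>
</updates>
"""

def to_sql(doc):
    """Turn a hypothetical update document into SQL for the OLAP store.
    A real system would use parameterized statements, not string pasting."""
    statements = []
    for ev in ET.fromstring(doc).iter("event"):
        if ev.get("op") == "insert":
            statements.append(
                "INSERT INTO events (id, owner, start, hours) VALUES "
                f"({ev.get('id')}, '{ev.findtext('owner')}', "
                f"'{ev.findtext('start')}', {ev.findtext('hours')})"
            )
        elif ev.get("op") == "delete":
            statements.append(f"DELETE FROM events WHERE id = {ev.get('id')}")
    return statements

for stmt in to_sql(UPDATE_DOC):
    print(stmt)
```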
