
Publication


Featured research published by William Rand.


International Journal of Geographical Information Science | 2005

Path dependence and the validation of agent-based spatial models of land use

Daniel G. Brown; Scott E. Page; Rick L. Riolo; Moira Zellner; William Rand

In this paper, we identify two distinct notions of accuracy of land‐use models and highlight a tension between them. A model can have predictive accuracy: its predicted land‐use pattern can be highly correlated with the actual land‐use pattern. A model can also have process accuracy: the process by which locations or land‐use patterns are determined can be consistent with real world processes. To balance these two potentially conflicting motivations, we introduce the concept of the invariant region, i.e., the area where land‐use type is almost certain, and thus path independent; and the variant region, i.e., the area where land use depends on a particular series of events, and is thus path dependent. We demonstrate our methods using an agent‐based land‐use model and using multi‐temporal land‐use data collected for Washtenaw County, Michigan, USA. The results indicate that, using the methods we describe, researchers can improve their ability to communicate how well their model performs, the situations or instances in which it does not perform well, and the cases in which it is relatively unlikely to predict well because of either path dependence or stochastic uncertainty.
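The invariant/variant distinction can be illustrated with a minimal sketch (a toy model, not the authors' land-use model; all parameters here are illustrative assumptions): run a stochastic land-use grid many times, and classify cells whose outcome agrees in at least 95% of runs as invariant (path independent), with the remainder variant (path dependent).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_land_use(grid_size=10, rng=rng):
    """Toy stochastic land-use model: each cell develops (1) or stays
    undeveloped (0) with a probability that falls off with distance
    from an assumed 'urban core' at the top-left corner."""
    y, x = np.mgrid[0:grid_size, 0:grid_size]
    dist = np.hypot(x, y) / np.hypot(grid_size - 1, grid_size - 1)
    p_develop = np.clip(1.0 - dist, 0.0, 1.0)
    return (rng.random((grid_size, grid_size)) < p_develop).astype(int)

def classify_regions(n_runs=200, threshold=0.95):
    """Cells whose outcome agrees across >= threshold of runs form the
    'invariant' (path-independent) region; all others are 'variant'
    (path-dependent)."""
    runs = np.stack([simulate_land_use() for _ in range(n_runs)])
    freq_developed = runs.mean(axis=0)
    return (freq_developed >= threshold) | (freq_developed <= 1 - threshold)

invariant = classify_regions()
print(f"invariant cells: {invariant.sum()} / {invariant.size}")
```

Cells near the core (almost certain to develop) and far from it (almost certain not to) come out invariant, while the contested middle band, where outcomes hinge on the particular random history, comes out variant.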


Management Science | 2013

Media, Aggregators and the Link Economy: Strategic Hyperlink Formation in Content Networks

Chrysanthos Dellarocas; Zsolt Katona; William Rand

A key property of the World Wide Web is the possibility for firms to place virtually costless links to third-party content as a substitute or complement to their own content. This ability to hyperlink has enabled new types of players, such as search engines and content aggregators, to successfully enter content ecosystems, attracting traffic and revenues by hosting links to the content of others. This, in turn, has sparked a heated controversy between content producers and aggregators regarding the legitimacy and social costs/benefits of uninhibited free linking. This work is the first to model the implications of interrelated and strategic hyperlinking and content investments. Our results provide a nuanced view of the much-touted “link economy”, highlighting both the beneficial consequences and the drawbacks of free hyperlinks for content producers and consumers. We show that content sites can reduce competition and improve profits by forming links to each other; in such networks one site makes high investments in content and other sites link to it. Interestingly, competitive dynamics often preclude the formation of link networks, even in settings where they would improve everyone's profits. Furthermore, such networks improve economic efficiency only when all members have similar abilities to produce content; otherwise the less capable nodes can free-ride on the content of the more capable nodes, reducing profits for the capable nodes as well as the average content quality available to consumers. Within these networks, aggregators have both positive and negative effects. By making it easier for consumers to access good quality content they increase the appeal of the entire content ecosystem relative to the alternatives. To the extent that this increases the total traffic flowing into the content ecosystem, aggregators can help increase the profits of the highest quality content sites.
At the same time, however, the market entry of aggregators takes away some of the revenue that would otherwise go to pure content sites. Finally, by placing links to only a subset of available content, aggregators further increase competitive pressure on content sites. Interestingly, this can increase the likelihood that such sites will then attempt to alleviate the competitive pressure by forming link networks.


Genetic and Evolutionary Computation Conference | 2010

Evolving viral marketing strategies

Forrest Stonedahl; William Rand; Uri Wilensky

One method of viral marketing involves seeding certain consumers within a population to encourage faster adoption of the product throughout the entire population. However, determining how many and which consumers within a particular social network should be seeded to maximize adoption is challenging. We define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree. We measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures: four classic theoretical models (random, lattice, small-world, and preferential attachment) and one empirical (extracted from Twitter friendship data). To discover good seeding strategies, we have developed a new tool, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models. This evolutionary search also provides insight into the interaction between strategies and network structure. Our results show that one simple strategy (ranking by node degree) is near-optimal for the four theoretical networks, but that a more nuanced strategy performs significantly better on the empirical Twitter-based network. We also find a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.
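The degree-based seeding strategy the paper finds near-optimal on theoretical networks can be sketched as follows. This is an illustrative toy (not BehaviorSearch and not the authors' model): a hand-rolled preferential-attachment network, a simple Bass-like adoption rule, and assumed parameter values throughout.

```python
import random

random.seed(42)

def preferential_attachment_graph(n=200, m=2):
    """Build a small preferential-attachment (Barabasi-Albert style)
    network as an adjacency dict of sets."""
    targets = list(range(m))
    repeated = []                       # node list weighted by degree
    adj = {i: set() for i in range(n)}
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = random.sample(repeated, m)
    return adj

def simulate_adoption(adj, seeds, p=0.01, q=0.3, steps=30):
    """Bass-like agent-based diffusion: agents adopt via innovation (p)
    or imitation (q, scaled by the adopted fraction of their neighbors)."""
    adopted = set(seeds)
    for _ in range(steps):
        new = set()
        for node, nbrs in adj.items():
            if node in adopted or not nbrs:
                continue
            frac = len(nbrs & adopted) / len(nbrs)
            if random.random() < p + q * frac:
                new.add(node)
        adopted |= new
    return len(adopted)

adj = preferential_attachment_graph()
budget = 5
# Degree-based seeding: target the most-connected nodes first.
by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:budget]
random_seeds = random.sample(list(adj), budget)
reach_degree = simulate_adoption(adj, by_degree)
reach_random = simulate_adoption(adj, random_seeds)
print(reach_degree, reach_random)
```

Ranking by degree is the one-line heuristic the abstract mentions; the paper's point is that on an empirical Twitter network a more nuanced weighting of network characteristics outperforms it.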


Journal of Marketing | 2016

Brand Buzz in the Echoverse

Kelly Hewett; William Rand; Roland T. Rust; Harald J. van Heerde

Social media sites have created a reverberating “echoverse” for brand communication, forming complex feedback loops (“echoes”) between the “universe” of corporate communications, news media, and user-generated social media. To understand these feedback loops, the authors process longitudinal, unstructured data using computational linguistics techniques and analyze them using econometric methods. By assembling one of the most comprehensive data sets in the brand communications literature with corporate communications, news stories, social media, and business outcomes, the authors document the echoverse (i.e., feedback loops between all of these sources). Furthermore, the echoverse has changed as online word of mouth has become prevalent. Over time, online word of mouth has fallen into a negativity spiral, with negative messages leading to greater volume, and firms are adjusting their communications strategies in response. The nature of brand communications has been transformed by online technology as corporate communications move increasingly from one to many (e.g., advertising) to one to one (e.g., Twitter) while consumer word of mouth moves from one to one (e.g., conversations) to one to many (e.g., social media). The results indicate that companies benefit from using social media (e.g., Twitter) for personalized customer responses, although there is still a role for traditional brand communications (e.g., press releases, advertising). The evolving echoverse requires managers to rethink brand communication strategies, with online communications becoming increasingly central.


Journal of Marketing Research | 2013

Improving Prelaunch Diffusion Forecasts: Using Synthetic Networks as Simulated Priors

Michael Trusov; William Rand; Yogesh V. Joshi

Although the role of social networks and consumer interactions in new product diffusion is widely acknowledged, such networks and interactions are often unobservable to researchers. What may be observable, instead, are aggregate diffusion patterns for past products adopted within a particular social network. The authors propose an approach for identifying systematic conditions that are stable across diffusions and thus are “transferrable” to new product introductions within a given network. Using Facebook applications data, the authors show that incorporation of such systematic conditions improves prelaunch forecasts. This research bridges the gap between the disciplines of Bayesian statistics and agent-based modeling by demonstrating how researchers can use stochastic relationships simulated within complex systems as meaningful inputs for Bayesian inference models.


Genetic and Evolutionary Computation Conference | 2005

Measurements for understanding the behavior of the genetic algorithm in dynamic environments: a case study using the Shaky Ladder Hyperplane-Defined Functions

William Rand; Rick L. Riolo

We describe a set of measures to examine the behavior of the Genetic Algorithm (GA) in dynamic environments. We describe how to use both average and best measures to look at performance, satisficability, robustness, and diversity. We use these measures to examine GA behavior with a recently devised dynamic test suite, the Shaky Ladder Hyperplane-Defined Functions (sl-hdfs). This test suite can generate random problems with similar levels of difficulty and provides a platform allowing systematic controlled observations of the GA in dynamic environments. We examine the results of these measures in two different versions of the sl-hdfs, one static and one regularly-changing. We provide explanations for the observations in these two different environments, and give suggestions as to future work.


International Conference on Social Computing | 2013

Predictability of User Behavior in Social Media: Bottom-Up v. Top-Down Modeling

David Darmon; Jared Sylvester; Michelle Girvan; William Rand

Recent work has attempted to capture the behavior of users on social media by modeling them as computational units processing information. We propose to extend this perspective by explicitly examining the predictive power of such a view. We consider a network of fifteen thousand users on Twitter over a seven week period. To evaluate the predictability of the users, we apply two contrasting modeling paradigms: computational mechanics and echo state networks. Computational mechanics seeks to construct the simplest model with the maximal predictive capability, while echo state networks relax from very complicated dynamics until predictive capability is reached. We demonstrate that the behavior of users on Twitter can be well-modeled as processes with self-feedback and compare the performance of models built with both the statistical and neural paradigms.
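A minimal echo state network of the kind used in the second modeling paradigm can be sketched as follows. This is a generic illustration, not the authors' implementation: the "user activity" series is a synthetic self-feedback process (an active step raises the chance of activity at the next step), and the reservoir size, spectral radius, and ridge penalty are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bursty binary activity series with self-feedback.
T = 500
activity = np.zeros(T)
for t in range(1, T):
    p = 0.6 if activity[t - 1] else 0.1
    activity[t] = rng.random() < p

# Echo state network: a fixed random recurrent reservoir whose
# spectral radius is scaled below 1, plus a trained linear readout.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T - 1):
    x = np.tanh(W @ x + W_in * activity[t])
    states[t + 1] = x        # reservoir state after seeing step t

# Ridge-regression readout predicting the next step from the state.
split = 400
X_tr, y_tr = states[1:split], activity[1:split]
w_out = np.linalg.solve(X_tr.T @ X_tr + 1e-2 * np.eye(n_res), X_tr.T @ y_tr)
pred = (states[split:] @ w_out) > 0.5
acc = (pred == activity[split:].astype(bool)).mean()
print(f"one-step prediction accuracy: {acc:.2f}")
```

Only the readout is trained; the reservoir's "very complicated dynamics" are left fixed, which is the relaxation the abstract contrasts with computational mechanics' search for the simplest maximally predictive model.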


Genetic and Evolutionary Computation Conference | 2006

The effect of crossover on the behavior of the GA in dynamic environments: a case study using the shaky ladder hyperplane-defined functions

William Rand; Rick L. Riolo; John H. Holland

One argument as to why the hyperplane-defined functions (hdfs) are a good testbed for the genetic algorithm (GA) is that the hdfs are built in the same way that the GA works. In this paper we test that hypothesis in a new setting by exploring the GA on a subset of the hdfs which are dynamic---the shaky ladder hyperplane-defined functions (sl-hdfs). In doing so we gain insight into how the GA makes use of crossover during its traversal of the sl-hdf search space. We begin this paper by explaining the sl-hdfs. We then conduct a series of experiments with various crossover rates and various rates of environmental change. Our results show that the GA performs better with than without crossover in dynamic environments. Though these results have been shown on some static functions in the past, they are re-confirmed and expanded here for a new type of function (the hdf) and a new type of environment (dynamic environments). Moreover we show that crossover is even more beneficial in dynamic environments than it is in static environments. We discuss how these results can be used to develop a richer knowledge about the use of building blocks by the GA.
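The crossover comparison can be illustrated with a toy experiment. This is a bitstring-matching problem whose target "shakes" (flips a few bits) at regular intervals, loosely mimicking a dynamic environment; it is not the sl-hdf suite, and every parameter below is an illustrative assumption.

```python
import random

random.seed(3)

TARGET_BITS = 40

def fitness(genome, target):
    """Number of positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, target))

def evolve(crossover_rate, generations=60, pop_size=50, shake_every=15):
    """Minimal GA with truncation selection, optional one-point
    crossover, bit-flip mutation, and a periodically 'shaking' target."""
    target = [random.randint(0, 1) for _ in range(TARGET_BITS)]
    pop = [[random.randint(0, 1) for _ in range(TARGET_BITS)]
           for _ in range(pop_size)]
    for gen in range(generations):
        if gen and gen % shake_every == 0:          # environment change
            for i in random.sample(range(TARGET_BITS), 5):
                target[i] ^= 1
        scored = sorted(pop, key=lambda g: fitness(g, target), reverse=True)
        parents = scored[: pop_size // 2]
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            if random.random() < crossover_rate:    # one-point crossover
                cut = random.randrange(1, TARGET_BITS)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [g ^ (random.random() < 0.01) for g in child]  # mutate
            pop.append(child)
    return max(fitness(g, target) for g in pop)

f_cross = evolve(crossover_rate=0.8)
f_no = evolve(crossover_rate=0.0)
print(f_cross, f_no)
```

A single run is noisy; the paper's claim that crossover helps more in dynamic than in static environments would require averaging many runs across crossover rates and change rates, as the authors do.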


Environment and Planning B: Planning and Design | 2010

The Problem with Zoning: Nonlinear Effects of Interactions between Location Preferences and Externalities on Land Use and Utility

Moira Zellner; Rick L. Riolo; William Rand; Daniel G. Brown; Scott E. Page; Luis E. Fernandez

An important debate in the literature on exurban sprawl is whether low-density development results from residential demand, as operationalized by developers, or from exclusionary zoning policies. Central to this debate is the purpose of zoning, which could alternatively be a mechanism to increase the utility of residents by separating land uses and reducing spillover effects of development, or an obstacle to market mechanisms that would otherwise allow the realization of residential preferences. To shed light on this debate, we developed an agent-based model of land-use change to study how the combined effects of zoning-enforcement levels, density preferences, preference heterogeneity, and negative externalities from development affect exurban development and the utility of residents. Our computational experiments show that sprawl is not inevitable, even when most of the population prefers low densities. The presence of negative externalities consistently encourages sprawl while decreasing average utility and flattening the utility distribution. Zoning can reduce sprawl by concentrating development in specific areas, but in doing so decreases average utility and increases inequality. Zoning does not internalize externalities; instead, it contains externalities in areas of different development density so that residents bear the burden of the external effects of the density they prefer. Effects vary with residential preference distributions and levels of zoning enforcement. These initial investigations can help inform policy makers about the conditions under which zoning enforcement is preferable to free-market development and vice versa. Future work will focus on the environmental impacts of different settlement patterns and the role land-use and market-based policies play in this relationship.


Lecture Notes in Computer Science | 2005

Shaky ladders, hyperplane-defined functions and genetic algorithms: systematic controlled observation in dynamic environments

William Rand; Rick L. Riolo

Though recently there has been interest in examining genetic algorithms (GAs) in dynamic environments, work still needs to be done in investigating the fundamental behavior of these algorithms in changing environments. When researching the GA in static environments, it has been useful to use test suites of functions that are designed for the GA so that the performance can be observed under systematic controlled conditions. One example of these suites is the hyperplane-defined functions (hdfs) designed by Holland [1]. We have created an extension of these functions, specifically designed for dynamic environments, which we call the shaky ladder functions. In this paper, we examine the qualities of this suite that facilitate its use in examining the GA in dynamic environments, describe the construction of these functions and present some preliminary results of a GA operating on these functions.

Collaboration


Dive into William Rand's collaboration.

Top Co-Authors

Uri Wilensky

Northwestern University


Moira Zellner

University of Illinois at Chicago
