Louis Anthony Cox
University of Colorado Denver
Publications
Featured research published by Louis Anthony Cox.
Journal of Heuristics | 2000
Louis Anthony Cox; Jennifer Ryan Sanchez
This paper presents a new heuristic algorithm for designing least-cost telecommunications networks to carry cell site traffic to wireless switches while meeting survivability, capacity, and technical compatibility constraints. This requires solving the following combinatorial optimization problems simultaneously: (1) Select a least-cost subset of locations (network nodes) as hubs where traffic is to be aggregated and switched, and choose the type of hub (high-capacity DS3 vs. lower-capacity DS1 hub) for each location; (2) Optimally assign traffic from other nodes to these hubs, so that the traffic entering the network at these nodes is routed to the assigned hubs while respecting capacity constraints on the links and routing-diversity constraints on the hubs to assure survivability; and (3) Optimally choose the types of links to be used in interconnecting the nodes and hubs based on the capacities and costs associated with each link type. Each of these optimization problems must be solved while accounting for its impacts on the other two. This paper introduces a short-term Tabu Search (STTS) meta-heuristic, with embedded knapsack and network flow sub-problems, that has proved highly effective in designing such “backhaul networks” for carrying personal communications services (PCS) traffic. It solves problems that are challenging for conventional branch-and-bound solvers in minutes instead of hours and finds lower-cost solutions. Applied to real-world network design problems, the heuristic has successfully identified designs that save over 20% compared to the best previously known designs.
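A minimal sketch of the short-term tabu search move structure is given below. It is illustrative only: the cost model (fixed hub-opening costs plus nearest-hub assignment distances), node count, and tabu tenure are invented assumptions standing in for the paper's full objective, which also embeds knapsack and network-flow sub-problems for link sizing and routing.

```python
# Minimal sketch of a short-term tabu search over hub-location decisions.
# The cost model here is hypothetical; the paper's real objective also embeds
# knapsack and network-flow sub-problems for link sizing and routing.
import random

random.seed(1)

N_NODES = 20
HUB_COST = 50.0
# Hypothetical symmetric "distances" between nodes.
dist = [[abs(i - j) for j in range(N_NODES)] for i in range(N_NODES)]

def cost(hubs):
    """Fixed cost of open hubs plus cost of assigning each node to its nearest hub."""
    if not hubs:
        return float("inf")
    fixed = HUB_COST * len(hubs)
    assign = sum(min(dist[i][h] for h in hubs) for i in range(N_NODES))
    return fixed + assign

def tabu_search(iterations=200, tenure=7):
    current = {0, N_NODES - 1}           # arbitrary starting hub set
    best, best_cost = set(current), cost(current)
    tabu = {}                            # node -> iteration until which flipping it is tabu
    for it in range(iterations):
        candidates = []
        for node in range(N_NODES):
            neighbor = set(current) ^ {node}      # flip node in/out of the hub set
            c = cost(neighbor)
            # Aspiration: allow a tabu move if it beats the best solution found so far.
            if tabu.get(node, -1) < it or c < best_cost:
                candidates.append((c, node, neighbor))
        c, node, neighbor = min(candidates)
        current = neighbor
        tabu[node] = it + tenure          # short-term memory: forbid reversing this move
        if c < best_cost:
            best, best_cost = set(neighbor), c
    return best, best_cost

print(tabu_search())
```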
Critical Reviews in Toxicology | 2013
Louis Anthony Cox; Douglas A. Popken; D. Wayne Berman
Abstract Many recent health risk assessments have noted that adverse health outcomes are significantly statistically associated with proximity to suspected sources of health hazard, such as manufacturing plants or point sources of air pollution. Using geographic proximity to sources as surrogates for exposure to (possibly unknown) releases, spatial ecological studies have identified potential adverse health effects based on significant regression coefficients between risk rates and distances from sources in multivariate statistical risk models. Although this procedure has been fruitful in identifying exposure–response associations, it is not always clear whether the resulting regression coefficients have valid causal interpretations. Spurious spatial regression and other threats to valid causal inference may undermine practical efforts to causally link health effects to geographic sources, even when there are clear statistical associations between them. This paper demonstrates the methodological problems by examining statistical associations and regression coefficients between spatially distributed exposure and response variables in a realistic data set for California. We find that distance from “nonsense” sources (such as arbitrary points or lines) are highly statistically significant predictors of cause-specific risks, such as traffic fatalities and incidence of Kaposi’s sarcoma. However, the signs of such associations typically depend on the distance scale chosen. This is consistent with theoretical analyses showing that random spatial trends (which tend to fluctuate in sign), rather than true causal relations, can create statistically significant regression coefficients: spatial location itself becomes a confounder for spatially distributed exposure and response variables. Hence, extreme caution and careful application of spatial statistical methods are warranted before interpreting proximity-based exposure–response relations as evidence of a possible or probable causal relation.
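The spurious-regression effect is easy to reproduce in a toy simulation. The sketch below is illustrative only: the coordinates, the smooth spatial trend, and the "nonsense" source location are all invented, and this is not the paper's California data set.

```python
# Minimal simulation of "spurious spatial regression": a response with a smooth
# spatial trend is regressed on distance to an arbitrary "nonsense" point.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 500
x = rng.uniform(0, 100, n)                 # spatial coordinates of hypothetical areas
y = rng.uniform(0, 100, n)

# Response with a smooth spatial trend (no causal link to the nonsense source)
# plus noise, e.g., risk increases gently from west to east.
response = 0.05 * x + 0.02 * y + rng.normal(0, 1, n)

# Distance from an arbitrarily chosen "nonsense" source location.
nonsense_source = (10.0, 90.0)
d = np.hypot(x - nonsense_source[0], y - nonsense_source[1])

result = stats.linregress(d, response)
print(f"slope = {result.slope:.4f}, p-value = {result.pvalue:.2e}")
# The p-value is typically tiny: spatial location confounds the distance variable,
# even though the "source" was chosen arbitrarily.
```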
Annals of Epidemiology | 2015
Louis Anthony Cox; Douglas A. Popken
PURPOSE: Between 2000 and 2010, air pollutant levels in counties throughout the United States changed significantly, with fine particulate matter (PM2.5) declining over 30% in some counties and ozone (O3) exhibiting large variations from year to year. This history provides an opportunity to compare county-level changes in average annual ambient pollutant levels to corresponding changes in all-cause (AC) and cardiovascular disease (CVD) mortality rates over the course of a decade. Past studies have demonstrated associations and subsequently either interpreted associations causally or relied on subjective judgments to infer causation. This article applies more quantitative methods to assess causality.
METHODS: This article examines data from these natural experiments of changing pollutant levels for 483 counties in the 15 most populated US states using quantitative methods for causal hypothesis testing, such as conditional independence and Granger causality tests. We assessed whether changes in historical pollution levels helped to predict and explain changes in CVD and AC mortality rates.
RESULTS: A causal relation between pollutant concentrations and AC or CVD mortality rates cannot be inferred from these historical data, although a statistical association between them is well supported. There were no significant positive associations between changes in PM2.5 or O3 levels and corresponding changes in disease mortality rates between 2000 and 2010, nor for shorter time intervals of 1 to 3 years.
CONCLUSIONS: These findings suggest that predicted substantial human longevity benefits resulting from reducing PM2.5 and O3 may not occur or may be smaller than previously estimated. Our results highlight the potential for heterogeneity in air pollution health effects across regions, and the high potential value of accountability research comparing model-based predictions of health benefits from reducing air pollutants to historical records of what actually occurred.
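A minimal version of the kind of Granger-style test mentioned in the methods can be sketched on synthetic data: does the lagged change in a pollutant add predictive power for the change in mortality beyond the mortality series' own lag? The series, lag structure, and coefficients below are invented; this is not the paper's county data set.

```python
# Minimal Granger-style check on synthetic data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

T = 200
d_pm = rng.normal(0, 1, T)                      # yearly changes in PM2.5 (synthetic)
d_mort = np.zeros(T)
for t in range(1, T):
    # Mortality changes depend only on their own past here (no pollutant effect built in).
    d_mort[t] = 0.3 * d_mort[t - 1] + rng.normal(0, 1)

# Restricted model: d_mort[t] ~ const + d_mort[t-1]
# Unrestricted model: adds d_pm[t-1]
y = d_mort[1:]
X_r = np.column_stack([np.ones(T - 1), d_mort[:-1]])
X_u = np.column_stack([X_r, d_pm[:-1]])

rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
rss_r, rss_u = rss(X_r), rss(X_u)

q = 1                                           # number of added lag terms
df = len(y) - X_u.shape[1]
F = ((rss_r - rss_u) / q) / (rss_u / df)
p = stats.f.sf(F, q, df)
print(f"F = {F:.3f}, p = {p:.3f}")   # typically non-significant: no pollutant effect was built in
```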
Risk Analysis | 2014
Elisabeth Paté-Cornell; Louis Anthony Cox
The three classic pillars of risk analysis are risk assessment (how big is the risk and how sure can we be?), risk management (what shall we do about it?), and risk communication (what shall we say about it, to whom, when, and how?). We propose two complements to these three pillars: risk attribution (who or what addressable conditions actually caused an accident or loss?) and learning from experience about risk reduction (what works, and how well?). Failures in complex systems usually evoke blame, often with insufficient attention to root causes of failure, including some aspects of the situation, design decisions, or social norms and culture. Focusing on blame, however, can inhibit effective learning, instead eliciting excuses to deflect attention and perceived culpability. Productive understanding of what went wrong, and how to do better, thus requires moving past recrimination and excuses. This article identifies common blame-shifting lame excuses for poor risk management. These generally contribute little to effective improvements and may leave real risks and preventable causes unaddressed. We propose principles from risk and decision sciences and organizational design to improve results. These start with organizational leadership. More specifically, they include: deliberate testing and learning, especially from near-misses and accident precursors; careful causal analysis of accidents; risk quantification; candid expression of uncertainties about costs and benefits of risk-reduction options; optimization of tradeoffs between gathering additional information and immediate action; promotion of safety culture; and mindful allocation of people, responsibilities, and resources to reduce risks. We propose that these principles provide sound foundations for improving successful risk management.
The Journal of Infectious Diseases | 2005
Louis Anthony Cox; Dennis D. Copeland; Michael Vaughn
Correspondence

Table 1. Mean duration of diarrhea versus ciprofloxacin resistance and foreign-travel status of cases.

                        Resistance to ciprofloxacin
Foreign travel          Yes                    No
Yes                     8.1 days (n = 29)      7.6 days (n = 48)
No                      6.9 days (n = 41)      6.9 days (n = 529)

NOTE. The data do not include patients not reporting diarrhea, those with continuing diarrhea, those unable to estimate duration of diarrhea, or those not responding to the foreign-travel question.

To the Editor—In their article “Prolonged Diarrhea Due to Ciprofloxacin-Resistant Campylobacter Infection” [1], Nelson et al. comment, “Although human infections with ciprofloxacin-resistant Campylobacter have become increasingly common, the human health consequences of such infections are not well described” (p. 1150). They then present analyses indicating that persons with ciprofloxacin-resistant infection had a longer mean duration of diarrhea than did persons with ciprofloxacin-susceptible infection (P = .01); this effect was independent of foreign travel (p. 1150). They conclude, “Persons with ciprofloxacin-resistant Campylobacter infection have a longer duration of diarrhea than do persons with ciprofloxacin-susceptible Campylobacter infection. Additional efforts are needed to preserve the efficacy of fluoroquinolones” (p. 1150). We have a number of concerns with these analyses. The following comments address some of them. Reexamining the raw data raises questions about the validity and general applicability of the stated conclusions and interpretations. Table 1 shows the mean number of illness-days (and number of cases) of Campylobacter infection that (1) are acquired via foreign travel versus domestically and (2) are ciprofloxacin resistant versus ciprofloxacin susceptible. This table shows the unadjusted data, without adjustment for the additional subset-selection and statistical-modeling steps used by Nelson et al. Although foreign travel is strongly associated with resistance to ciprofloxacin, and also with longer mean duration of diarrhea, resistance to ciprofloxacin is clearly not associated with longer mean duration of diarrhea among cases of domestically acquired campylobacteriosis. Any analysis that begins with crude data showing no apparent effect but that ends by concluding that there is a significant effect for some subsets of subjects must be especially diligent in the reporting of model diagnostics, as well as in correcting for potential model-selection bias, variable-selection and -coding biases, data subset-selection bias, and multiple-testing biases, all of which can threaten study validity by producing an excessive number of false positives [2, 3]. Nelson et al. do not report such recommended diagnostics and corrections, which leaves open the possibility that the …
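For readers following the arithmetic, the stratum-specific contrasts implied by Table 1 can be computed directly. The sketch below uses only the means and counts reported there; no variances are given, so it is descriptive, not a significance test.

```python
# Stratum-specific comparison from Table 1 (means and counts as reported there).
table = {
    # stratum: {status: (mean_days, n)}
    "foreign travel": {"resistant": (8.1, 29), "susceptible": (7.6, 48)},
    "domestic":       {"resistant": (6.9, 41), "susceptible": (6.9, 529)},
}

for stratum, cells in table.items():
    diff = cells["resistant"][0] - cells["susceptible"][0]
    print(f"{stratum}: resistant minus susceptible mean duration = {diff:+.1f} days")

# Crude (unstratified) means, weighted by the reported cell counts; any marginal
# difference here is driven by the travel stratum, since the domestic difference is zero.
def pooled(status):
    cells = [table[s][status] for s in table]
    n = sum(c[1] for c in cells)
    return sum(m * k for m, k in cells) / n

print(f"crude difference = {pooled('resistant') - pooled('susceptible'):+.2f} days")
```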
Dose-response | 2005
Louis Anthony Cox
Dose-response data for many chemical carcinogens exhibit multiple apparent concentration thresholds. A relatively small increase in exposure concentration near such a threshold disproportionately increases incidence of a specific tumor type. Yet, many common mathematical models of carcinogenesis do not predict such threshold-like behavior when model parameters (e.g., describing cell transition rates) increase smoothly with dose, as often seems biologically plausible. For example, commonly used forms of both the traditional Armitage-Doll and multistage (MS) models of carcinogenesis and the Moolgavkar-Venzon-Knudson (MVK) two-stage stochastic model of carcinogenesis typically yield smooth dose-response curves without sudden jumps or thresholds when exposure is assumed to increase cell transition rates in proportion to exposure concentration. This paper introduces a general mathematical modeling framework that includes the MVK and MS model families as special cases, but that shows how abrupt transitions in cancer hazard rates, considered as functions of exposure concentrations and durations, can emerge naturally in large cell populations even when the rates of cell-level events increase smoothly (e.g., proportionally) with concentration. In this framework, stochastic transitions of stem cells among successive events represent exposure-related damage. Cell proliferation, cell killing, and apoptosis can occur at different stages. Key components include: (1) an effective number of stem cells undergoing active cycling and hence vulnerable to stochastic transitions representing somatically heritable transformations (these need not occur in any special linear order, as in the MS model); (2) a random time until the first malignant stem cell is formed, namely the first order statistic T = min{T1, T2, …, Tn} of n random variables interpreted as the random times at which each of n initial stem cells or their progeny first becomes malignant; and (3) a random time for a normal stem cell to complete a full set of transformations converting it to a malignant one, interpreted very generally as the first passage time through a network of stochastic transitions, possibly with very many possible paths and unknown topology. In this very general family of models, threshold-like (J-shaped or multi-threshold) dose-response nonlinearities naturally emerge even without cytotoxicity, as consequences of stochastic phase transition laws for traversals of random transition networks. With cytotoxicity present, U-shaped as well as J-shaped dose-response curves can emerge. These results are universal, i.e., independent of specific biological details represented by the stochastic transition networks.
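The phase-transition intuition can be illustrated with a deliberately simplified simulation, not the paper's model: edges of a random transition network become traversable with probability proportional to dose, and the chance that any path from a normal to a malignant state exists rises steeply once a connectivity threshold is crossed. The network size, doses, and proportionality constant below are arbitrary assumptions.

```python
# Illustrative sketch (not the paper's model): probability that a "damage path"
# exists from a normal state to a malignant state in a random transition network
# whose per-edge traversal probability rises in proportion to dose. Random-graph
# connectivity has a threshold, so the response rises steeply even though each
# edge probability grows smoothly with dose.
import random
from collections import deque

random.seed(0)

N_STATES = 100          # somatic states between "normal" (SOURCE) and "malignant" (SINK)
SOURCE, SINK = 0, N_STATES - 1
TRIALS = 200

def path_exists(p_edge):
    """Sample a random directed network and test reachability SOURCE -> SINK by BFS."""
    adj = {u: [v for v in range(N_STATES)
               if v != u and random.random() < p_edge] for u in range(N_STATES)}
    seen, queue = {SOURCE}, deque([SOURCE])
    while queue:
        u = queue.popleft()
        if u == SINK:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

for dose in [0.5, 0.8, 1.0, 1.2, 1.5, 2.0]:
    p_edge = 0.01 * dose               # edge traversal probability, proportional to dose
    prob = sum(path_exists(p_edge) for _ in range(TRIALS)) / TRIALS
    print(f"dose {dose:4.1f}  P(path to malignancy) ~ {prob:.2f}")
```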
Interfaces | 2008
Louis Anthony Cox; Gerald G. Brown; Stephen M. Pollock
In areas of risk assessment ranging from terrorism to health, safety, and the environment, authoritative guidance urges risk analysts to quantify and display their uncertainties about inputs that significantly affect the results of an analysis, including their uncertainties about subjective probabilities of events. Such “uncertainty characterization” is said to be an important part of fully and honestly informing decision makers about the estimates and uncertainties in analyses that support policy recommendations, enabling them to make better decisions. But is it? Characterization of uncertainties about probabilities often carries zero value of information and accomplishes nothing to improve risk-management decisions. Uncertainties about consequence probabilities are not worth characterizing when final actions must be taken based on information available now.
“But there seemed to be no chance of this, so she began looking at everything about her to pass away the time.” (Lewis Carroll, Alice in Wonderland)
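The zero-value-of-information point can be made concrete with a toy decision-analytic calculation; all numbers below are invented for illustration. When the same action is optimal under every possible resolved value of an uncertain probability, learning that probability exactly is worth nothing.

```python
# Toy expected-value-of-information calculation: resolving uncertainty about an
# event probability has zero value when the same action is optimal under every
# possible resolved value.
COST_PROTECT = 10.0                   # cost of the risk-reduction action
LOSS = 100.0                          # loss if the adverse event occurs unprotected
P_SCENARIOS = {0.2: 0.5, 0.4: 0.5}    # uncertain event probability and its weights

def best_cost(p):
    """Expected cost of the better action when the event probability is p."""
    return min(COST_PROTECT, p * LOSS)

# Decide now, using the mean probability.
p_mean = sum(p * w for p, w in P_SCENARIOS.items())
cost_now = best_cost(p_mean)

# Decide after (hypothetically) learning the true probability.
cost_with_info = sum(w * best_cost(p) for p, w in P_SCENARIOS.items())

print(f"expected cost acting now:        {cost_now:.1f}")
print(f"expected cost with perfect info: {cost_with_info:.1f}")
print(f"value of information:            {cost_now - cost_with_info:.1f}")   # 0.0
```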
Dose-response | 2006
Louis Anthony Cox
The possibility of hormesis in individual dose-response relations undermines traditional epidemiological criteria and tests for causal relations between exposure and response variables. Non-monotonic exposure-response relations in a large population may lack aggregate consistency, strength, biological gradient, and other hallmarks of traditional causal relations. For example, a U-shaped or n-shaped curve may exhibit zero correlation between dose and response. Thus, possible hormesis requires new ways to detect potentially causal exposure-response relations. This paper introduces information-theoretic criteria for identifying potential causality in epidemiological data that may contain nonmonotonic or threshold dose-response nonlinearities. Roughly, exposure variable X is a potential cause of response variable Y if and only if: (a) INFORMATIVE: X is informative about Y, i.e., the mutual information between X and Y, I(X; Y), measured in bits, is positive (this provides the required generalization of statistical association measures for monotonic relations); (b) UNCONFOUNDED: X provides information about Y that cannot be removed by conditioning on other variables; (c) PREDICTIVE: past values of X are informative about future values of Y, even after conditioning on past values of Y; and (d) CAUSAL ORDERING: Y is conditionally independent of the parents of X, given X. These criteria yield practical algorithms for detecting potential causation in cohort, case-control, and time series data sets. We illustrate them by identifying potential causes of campylobacteriosis, a food-borne bacterial infectious diarrheal illness, in a recent case-control data set. In contrast to previous analyses, our information-theoretic approach identifies a hitherto unnoticed, highly statistically significant, hormetic (U-shaped) relation between recent fast food consumption and women's risk of campylobacteriosis. We also discuss the application of the new information-theoretic criteria in resolving ambiguities and apparent contradictions due to confounding and information redundancy or overlap among variables in epidemiological data sets.
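Criterion (a) can be illustrated with a small synthetic example: a U-shaped relation has essentially zero Pearson correlation but clearly positive mutual information. The data, noise level, and binning below are invented for illustration and are unrelated to the campylobacteriosis data set.

```python
# Sketch: a U-shaped exposure-response relation can have near-zero correlation
# yet clearly positive mutual information (plug-in histogram estimate, in bits).
import numpy as np

rng = np.random.default_rng(7)

n = 20000
x = rng.uniform(-1, 1, n)                    # exposure
y = x**2 + rng.normal(0, 0.05, n)            # U-shaped response plus noise

print(f"Pearson correlation: {np.corrcoef(x, y)[0, 1]:+.3f}")   # close to 0

def mutual_information(x, y, bins=20):
    """Plug-in estimate of I(X;Y) in bits from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

print(f"estimated I(X;Y): {mutual_information(x, y):.2f} bits")  # clearly > 0
```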
Journal of Heuristics | 1997
Louis Anthony Cox; Lawrence Davis; Leonard L. Lu; David Orvosh; Xiaorong Sun; Dean Sirovica
Designing cost-effective telecommunications networks often involves solving several challenging, interdependent combinatorial optimization problems simultaneously. For example, it may be necessary to select a least-cost subset of locations (network nodes) to serve as hubs where traffic is to be aggregated and switched; optimally assign other nodes to these hubs, meaning that the traffic entering the network at these nodes will be routed to the assigned hubs while respecting capacity constraints on the links; and optimally choose the types of links to be used in interconnecting the nodes and hubs based on the capacities and costs associated with each link type. Each of these three combinatorial optimization problems must be solved while taking into account its impacts on the other two. This paper introduces a genetic algorithm (GA) approach that has proved effective in designing networks for carrying personal communications services (PCS) traffic. The key innovation is to represent information about hub locations and their interconnections as two parts of a chromosome, so that solutions to both aspects of the problem evolve in parallel toward a globally optimal solution. This approach allows realistic problems that take 4–10 hours to solve via a more conventional branch-and-bound heuristic to be solved in 30–35 seconds. Applied to a real network design problem provided as a test case by Cox California PCS, the heuristic successfully identified a design 10% less expensive than the best previously known design. Cox California PCS has adopted the heuristic results and plans to incorporate network optimization in its future network designs and requests for proposals.
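A minimal sketch of the two-part chromosome idea follows. The encoding details (binary hub-open flags plus a per-hub interconnect flag) and the crossover operator are invented for illustration and are much simpler than the paper's actual representation.

```python
# Minimal sketch of a two-part chromosome: one part encodes which candidate
# locations are opened as hubs, the other encodes an interconnect choice per hub.
# A crossover operator recombines each part independently, so both aspects of a
# design evolve in parallel.
import random

random.seed(3)

N_LOCATIONS = 12

def random_chromosome():
    hubs = [random.randint(0, 1) for _ in range(N_LOCATIONS)]    # part 1: hub open/closed
    links = [random.randint(0, 1) for _ in range(N_LOCATIONS)]   # part 2: interconnect type per hub
    return hubs, links

def crossover(parent_a, parent_b):
    """One-point crossover applied separately within each chromosome part."""
    child = []
    for part_a, part_b in zip(parent_a, parent_b):
        cut = random.randint(1, len(part_a) - 1)
        child.append(part_a[:cut] + part_b[cut:])
    return tuple(child)

a, b = random_chromosome(), random_chromosome()
print("parent A:", a)
print("parent B:", b)
print("child:   ", crossover(a, b))
```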
Enabling technologies for simulation science. Conference | 2003
Douglas A. Popken; Louis Anthony Cox
This paper describes initial research to define and demonstrate an integrated set of algorithms for conducting high-level Operational Simulations. In practice, an Operational Simulation would be used during an ongoing military mission to monitor operations, update state information, compare actual versus planned states, and suggest revised alternative Courses of Action. Significant technical challenges to this realization result from the size and complexity of the problem domain, the inherent uncertainty of situation assessments, and the need for immediate answers. Taking a top-down approach, we initially define the problem with respect to high-level military planning. By narrowing the state space we are better able to focus on model, data, and algorithm integration issues without getting sidetracked by issues specific to any single application or implementation. We propose three main functions in the planning cycle: situation assessment, parameter update, and plan assessment and prediction. Situation assessment uses hierarchical Bayes Networks to estimate initial state probabilities. A parameter update function based on Hidden Markov Models then produces revised state probabilities and state transition probabilities (model identification). Finally, the plan assessment and prediction function uses these revised estimates for simulation-based prediction as well as for determining optimal policies via Markov Decision Processes and simulation-optimization heuristics.
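The state-probability update step can be sketched as a single hidden-Markov-model forward recursion. The two mission states, the transition matrix, and the observation likelihoods below are invented for illustration; the paper's models are far richer.

```python
# Minimal sketch of a state-probability update: one forward-algorithm recursion
# for a hidden Markov model (predict with the transition matrix, then weight by
# the likelihood of the new observation and renormalize).
import numpy as np

states = ["on_plan", "off_plan"]

prior = np.array([0.8, 0.2])                 # current state probabilities
transition = np.array([[0.9, 0.1],           # P(next state | current state)
                       [0.3, 0.7]])
# Likelihood of the newly observed evidence under each state (assumed values).
obs_likelihood = np.array([0.2, 0.6])

predicted = prior @ transition               # time update (prediction)
posterior = predicted * obs_likelihood       # measurement update
posterior /= posterior.sum()                 # normalize

for s, p in zip(states, posterior):
    print(f"P({s} | evidence) = {p:.3f}")
```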