Charles D. Nicholson
University of Oklahoma
Publications
Featured research published by Charles D. Nicholson.
Reliability Engineering & System Safety | 2016
Charles D. Nicholson; Kash Barker; Jose Emmanuel Ramirez-Marquez
This work develops and compares several flow-based vulnerability measures to prioritize important network edges for the implementation of preparedness options. These network vulnerability measures quantify different characteristics and perspectives on enabling maximum flow, creating bottlenecks, and partitioning into cutsets, among others. The efficacy of these vulnerability measures in motivating preparedness options is evaluated against simulated, geographically located disruptions. Results suggest that a weighted flow capacity rate, which accounts for both (i) the contribution of an edge to maximum network flow and (ii) the extent to which the edge is a bottleneck in the network, shows the most promise across four instances of varying network sizes and densities.
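As an illustrative sketch only (not the paper's exact weighted flow capacity rate, whose definition is not reproduced here), the following Python snippet, assuming the networkx library and an invented toy graph, scores edges by combining the flow they carry in a single s-t maximum flow with their capacity utilization:

```python
# Illustrative flow-based edge scoring on a small capacitated network.
# Score = (share of max flow carried) * (capacity utilization); a value
# near 1 on both factors marks a heavily used, saturated bottleneck edge.
import networkx as nx

G = nx.DiGraph()
for u, v, cap in [("s", "a", 10), ("s", "b", 5), ("a", "b", 4),
                  ("a", "t", 6), ("b", "t", 8)]:
    G.add_edge(u, v, capacity=cap)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")

scores = {}
for u, v, data in G.edges(data=True):
    flow = flow_dict[u][v]
    utilization = flow / data["capacity"]   # 1.0 => saturated edge
    scores[(u, v)] = (flow / flow_value) * utilization

for edge, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(edge, round(score, 3))
```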
Structure and Infrastructure Engineering | 2017
Weili Zhang; Naiyu Wang; Charles D. Nicholson
This paper presents a novel resilience-based framework to optimise the scheduling of post-disaster recovery actions for road-bridge transportation networks. The methodology systematically incorporates network topology, redundancy, traffic flow, damage level and available resources into the stochastic processes of network post-hazard recovery strategy optimisation. Two metrics are proposed for measuring rapidity and efficiency of the network recovery: total recovery time (TRT) and the skew of the recovery trajectory (SRT). The TRT is the time required for the network to be restored to its pre-hazard functionality level, while the SRT is a metric defined for the first time in this study to capture the characteristics of the recovery trajectory that relate to the efficiency of the restoration strategies considered. Based on this two-dimensional metric, a restoration scheduling method is proposed for optimal post-disaster recovery planning for road-bridge transportation networks. To illustrate the proposed methodology, a genetic algorithm is used to solve the restoration schedule optimisation problem for a hypothetical bridge network with 30 nodes and 37 bridges subjected to a scenario seismic event. A sensitivity study using this network illustrates the impact of the resourcefulness of a community and its time-dependent commitment of resources on the network recovery time and trajectory.
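As a rough illustration of the recovery metrics, here is a minimal sketch (hypothetical bridge data; the paper's SRT definition and traffic modeling are not reproduced) of computing TRT for one candidate repair schedule, the quantity a genetic algorithm would optimize over permutations of repairs:

```python
# Hypothetical single-crew, serial recovery: TRT is the time at which
# network functionality returns to its pre-hazard level of 1.0.
repair_hours = {"B1": 40, "B2": 120, "B3": 60}            # invented durations
functionality_gain = {"B1": 0.2, "B2": 0.5, "B3": 0.3}    # invented restored shares

def total_recovery_time(schedule):
    """Return TRT and the stepwise recovery trajectory for a schedule."""
    functionality = 1.0 - sum(functionality_gain[b] for b in schedule)
    t, trajectory = 0.0, [(0.0, functionality)]
    for bridge in schedule:
        t += repair_hours[bridge]
        functionality += functionality_gain[bridge]
        trajectory.append((t, functionality))
    return t, trajectory

trt, trajectory = total_recovery_time(["B3", "B1", "B2"])
print(f"TRT = {trt} hours; trajectory = {trajectory}")
```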
Sustainable and Resilient Infrastructure | 2017
Kash Barker; James H. Lambert; Christopher W. Zobel; Andrea H. Tapia; Jose Emmanuel Ramirez-Marquez; Laura A. Albert; Charles D. Nicholson; Cornelia Caragea
Theory, methodology, and applications of risk analysis contribute to the quantification and management of resilience. For risk analysis, numerous complementary frameworks, guidelines, case studies, etc., are available in the literature. For resilience, the documented applications are sparse relative to numerous untested definitions and concepts. This essay on resilience analytics motivates the methodology, tools, and processes that will achieve resilience of real systems. The paper describes how risk analysts will lead in the modeling, quantification, and management of resilience for a variety of systems subject to future conditions, including technologies, economics, environment, health, developing regions, and regulations. The paper identifies key gaps where methodological innovations are needed, presenting resilience of interdependent infrastructure networks as an example. Descriptive, predictive, and prescriptive analytics are differentiated. A key outcome will be the recognition, adoption, and advancement of resilience analytics by scholars and practitioners of risk analysis.
Reliability Engineering & System Safety | 2017
Yasser Almoghathawi; Kash Barker; Claudio M. Rocco; Charles D. Nicholson
Analyzing network vulnerability is a key element of network planning in order to be prepared for any disruptive event that might impact the performance of the network. Hence, many importance measures have been proposed to identify the important components in a network with respect to vulnerability and to rank them according to an individual importance measure. In this paper, however, we propose a new approach to identify the most important network components based on multiple importance measures, using a multi-criteria decision making (MCDM) method, namely the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), which is able to take into account the preferences of decision-makers. We consider multiple edge-specific flow-based importance measures as the criteria, with the edges of the network as the alternatives. TOPSIS is then used to rank the edges based on their importance under multiple different importance measures. The proposed approach is illustrated on networks of different densities, along with an analysis of the effects of the criteria weights.
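A compact sketch of standard TOPSIS as described, applied to a hypothetical edge-by-criteria matrix of importance scores (all values and weights invented):

```python
# Standard TOPSIS: vector-normalize, weight, measure distances to the
# ideal and anti-ideal alternatives, rank by relative closeness.
import numpy as np

# rows = network edges (alternatives), columns = importance measures (criteria)
X = np.array([[0.8, 0.4, 0.7],
              [0.5, 0.9, 0.3],
              [0.6, 0.6, 0.6]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])   # decision-maker preferences

R = X / np.linalg.norm(X, axis=0)     # normalize each criterion column
V = R * weights                       # weighted normalized matrix

# All criteria treated as benefit criteria (higher importance is better).
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("Edge ranking (best first):", np.argsort(-closeness))
```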
Sustainable and Resilient Infrastructure | 2016
Weili Zhang; Charles D. Nicholson
One strategy to mitigate social and economic vulnerabilities of communities to natural disasters is to enhance the current infrastructure underlying the community. Decisions regarding allocation of limited resources to improve infrastructure components are complex and involve various trade-offs. In this study, an efficient multi-objective optimization model is proposed to support decisions regarding building retrofits within a community. In particular, given a limited budget and a heterogeneous commercial and residential building stock, solutions to the proposed model allow a detailed analysis of the trade-offs between direct economic loss and the competing objective of minimizing immediate population dislocation. The developed mathematical model is informed by earthquake simulation modeling as well as population dislocation modeling from the field of social science. The model is applied to the well-developed virtual city, Centerville, designed collaboratively by a team of engineering experts, economists, and social scientists. Multiple Pareto optimal solutions are computed in the case study and a detailed analysis regarding the various decision strategies is provided.
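As a small illustration of the trade-off analysis, here is a sketch (hypothetical retrofit plans) of filtering the non-dominated, i.e. Pareto optimal, solutions under the two minimization objectives:

```python
# Keep only plans not dominated on (direct economic loss, population
# dislocation); a plan is dominated if another is at least as good on
# both objectives and strictly better on one.
def pareto_front(solutions):
    front = []
    for s in solutions:
        dominated = any(o != s and o[0] <= s[0] and o[1] <= s[1]
                        and (o[0] < s[0] or o[1] < s[1]) for o in solutions)
        if not dominated:
            front.append(s)
    return front

plans = [(12.0, 900), (9.5, 1400), (15.0, 700), (9.5, 1600), (11.0, 1000)]
print(pareto_front(plans))  # [(12.0, 900), (9.5, 1400), (15.0, 700), (11.0, 1000)]
```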
Journal of Biomedical Informatics | 2017
Charles D. Nicholson; Leslie Goodwin; Corey Clark
This work presents Divided Neighborhood Exploration Search, a new search heuristic designed to be used with inference algorithms, such as Bayesian networks, to improve the reverse engineering of gene regulatory networks. The approach systematically moves through the search space to find topologies representative of gene regulatory networks that are more likely to explain microarray data. Empirical testing demonstrates that the novel method is superior to widely employed greedy search techniques in both the quality of the inferred networks and computational time.
Computers & Industrial Engineering | 2016
Weili Zhang; Charles D. Nicholson
Highlights:
- A statistical learning approach is introduced for a traditional optimization problem.
- The problem is reformulated as a linear relaxation based on model predictions.
- The method can be part of an exact solution strategy and used as a primal heuristic.
- Empirical tests demonstrate improved solutions over leading commercial software.
- Incremental solution time is negligible for large problems.

A new heuristic procedure for the fixed charge network flow (FCNF) problem is proposed. The new method leverages a probabilistic model to create an informed reformulation and relaxation of the FCNF problem. The technique relies on probability estimates that an edge in a graph should be included in an optimal flow solution. These probability estimates, derived from a statistical learning technique, are used to reformulate the problem as a linear program which can be solved efficiently. This method can be used as an independent heuristic for the FCNF problem or as a primal heuristic. In rigorous testing, the solution quality of the new technique is evaluated and compared to results obtained from a commercial solver. Testing demonstrates that the novel prediction-based relaxation outperforms linear programming relaxation in solution quality, and that as a primal heuristic the method significantly improves the solutions found for large problem instances within a given time limit.
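For context, a sketch of the underlying problem and the classical linear programming relaxation it improves upon (standard textbook form, not taken from the paper; the final prediction-weighted cost is a hypothetical illustration):

```latex
% FCNF with arc costs c, fixed charges f, capacities u, node balances b.
\begin{align*}
\min \quad & \sum_{(i,j)\in E} \bigl(c_{ij}\,x_{ij} + f_{ij}\,y_{ij}\bigr) \\
\text{s.t.} \quad & \sum_{j:(i,j)\in E} x_{ij} - \sum_{j:(j,i)\in E} x_{ji} = b_i
  && \forall i \in V \\
& 0 \le x_{ij} \le u_{ij}\,y_{ij}, \quad y_{ij} \in \{0,1\}
  && \forall (i,j) \in E
\end{align*}
```

Relaxing y_ij = x_ij/u_ij yields the classical LP relaxation with linearized cost (c_ij + f_ij/u_ij) x_ij. A prediction-informed relaxation could, for example, discount arcs that the model estimates are likely to carry flow, e.g. (c_ij + (1 - p̂_ij) f_ij/u_ij) x_ij; this particular weighting is a hypothetical form, not the paper's exact reformulation.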
Computers & Industrial Engineering | 2016
Charles D. Nicholson; Weili Zhang
Highlights:
- A predictive model is investigated to determine whether or not arcs are selected in an optimal solution of an FCNF problem.
- The accuracy of the predictive model is very high.
- The model has useful explanatory power regarding the predictors defined.
- A component importance measure is developed to rank the arcs in the network.

The fixed charge network flow (FCNF) problem is a classical NP-hard combinatorial problem with widespread applications. To the best of our knowledge, this is the first paper that employs a statistical learning technique to analyze and quantify the effect of various network characteristics on the optimal solution of the FCNF problem. In particular, we create a probabilistic classifier based on 18 network-related variables to produce a quantitative measure that an arc in the network will have a non-zero flow in an optimal solution. The predictive model achieves 85% cross-validated accuracy. An application employing the predictive model is presented from the perspective of identifying critical network components based on the likelihood of an arc being used in an optimal solution.
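A minimal sketch of such a probabilistic arc classifier (synthetic features standing in for the paper's 18 network-related variables; scikit-learn assumed):

```python
# Cross-validated probabilistic classifier for arc usage in an optimal
# solution; predicted probabilities then double as a criticality ranking.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_arcs = 1000
X = rng.normal(size=(n_arcs, 18))   # stand-ins: capacities, cost ratios, centralities, ...
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_arcs) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())

clf.fit(X, y)
usage_prob = clf.predict_proba(X)[:, 1]
print("Top-5 most critical arcs:", np.argsort(-usage_prob)[:5])
```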
Archive | 2015
Kash Barker; Charles D. Nicholson; Jose Emmanuel Ramirez-Marquez
Network resilience to a disruption is generally considered to be a function of the initial impact of the disruption (the network's vulnerability) and the trajectory of recovery after the disruption (the network's recoverability). In the context of network resilience, this work develops and compares several flow-based importance measures to prioritize network edges for the implementation of preparedness options. For a particular preparedness option and particular geographically located disruption, we compare the different importance measures in their resulting network vulnerability, as well as network resilience for a general recovery strategy. Results suggest that a weighted flow capacity rate, which accounts for both (i) the contribution of an edge to maximum network flow and (ii) the extent to which the edge is a bottleneck in the network, shows the most promise across four instances of varying network sizes and densities.

Resilience, broadly defined as the ability to stave off the effects of a disruption and subsequently return to a desired state, has been studied across a number of fields, including engineering (Hollnagel et al. 2006, Ouyang and Duenas-Osorio 2012) and risk contexts (Haimes 2009, Aven 2011), to name a few. Resilience has increasingly appeared in the literature (Park et al. 2013), reflecting the need to prepare for the inevitability of disruptions.

[Figure 1: Graphical depiction of network performance, φ(t), over time (adapted from Henry and Ramirez-Marquez 2012).]

Figure 1 illustrates three dimensions of resilience: reliability, vulnerability, and recoverability. The network service function φ(t) describes the behavior or performance of the network at time t (e.g., φ(t) could describe traffic flow or delay for a highway network). Prior to disruption e, the ability of the network to meet performance expectations is described by its reliability, often considered to be the likelihood of connectivity of a network. Research in the area of recoverability is related to understanding the ability and speed of networks to recover after a disruptive event, similar in concept to rapidity in the "resilience triangle" literature in civil infrastructure (Bruneau et al. 2003). Emphasis in this paper is placed on the vulnerability dimension: the ability of e to impact network performance in an adverse manner is a function of the network's vulnerability (Nagurney and Qiang 2008, Zio et al. 2008, Zhang et al. 2011), similar in concept to robustness in the "resilience triangle" literature. Haimes (2006) broadly offers that the states of the system are described by a state vector, suggesting that vulnerability is multifaceted (i.e., certain aspects of a system may be adversely affected by certain events and not others). Our work adopts this qualitative perspective, though we assume that the vulnerabilities found in the different aspects of the network can still be measured by changes in a single network service function φ(t). As such, Jonsson et al. (2008) define vulnerability appropriately for our work as the magnitude of damage given the occurrence of a particular disruptive event.

Networks have been characterized in two broad categories with respect to how their vulnerability is analyzed (Mishkovski et al. 2011): (i) those that involve "structural robustness," or how networks behave after the removal of a set of nodes or links based only on topological features, and (ii) those that involve "dynamic robustness," or how networks behave after the removal of a set of nodes or links given load redistribution leading to potential cascading failures. With respect to Figure 1, in networks that are primarily described by structural robustness (e.g., inland waterway, railway), t_e and t_d would coincide such that network performance drops immediately as disruption e occurs. The performance of networks exhibiting dynamic robustness would dissipate over time after a disruption due to cascading effects (e.g., electric power networks), such that t_d is subsequent to t_e. This paper focuses on networks described by structural robustness.

Emphasis is placed on vulnerability in the larger context of network resilience. Resilience is defined here as the time-dependent ratio of recovery over loss; we denote resilience by Я (Whitson and Ramirez-Marquez 2009), as R is commonly reserved for reliability. Similar in concept to the resilience triangle, we make use of the resilience paradigm provided in Figure 1, and we quantify resilience with Eq. (1) (Pant et al. 2014, Baroud et al. 2014, Barker et al. 2013, Henry and Ramirez-Marquez 2012), where φ(t_0) is the "as-planned" performance level of the network, t_d is the point in time after the disruption where network performance is at its most disrupted level, and recovery of the network occurs between times t_s and t_f:

Я_φ(t|e) = [φ(t|e) − φ(t_d|e)] / [φ(t_0) − φ(t_d|e)]    (1)
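For concreteness, a minimal sketch of evaluating Eq. (1) on a hypothetical recovery trajectory (performance values and time units are invented; only the formula follows the text):

```python
# Eq. (1): resilience at time t as recovery achieved over performance lost.
def resilience(phi_t, phi_t0, phi_td):
    """Я_phi(t|e) = (phi(t|e) - phi(t_d|e)) / (phi(t_0) - phi(t_d|e))."""
    return (phi_t - phi_td) / (phi_t0 - phi_td)

phi_t0 = 100.0   # as-planned network performance
phi_td = 40.0    # performance at the most disrupted point, time t_d
for t, phi_t in [(1, 40.0), (2, 70.0), (3, 100.0)]:
    print(t, resilience(phi_t, phi_t0, phi_td))   # 0.0 -> 0.5 -> 1.0
```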
1. QUANTIFYING NETWORK VULNERABILITY

A common approach to quantifying network vulnerability is with graph invariants (e.g., connectivity, diameter, betweenness centrality) as deterministic measures (Boesch et al. 2009). We focus on tangible metrics of network behavior in the form of a flow-based service function, φ(t), rather than graph-theoretic measures of performance. For this work, we choose the all node pairs average maximum flow for φ, calculated by finding the maximum flow from a source node s to a sink node t, exhausting all (s, t) pairs across the network, and averaging the maximum flow over all (s, t) pairs.
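The all node pairs average maximum flow service function is straightforward to compute on small graphs; the following sketch (assuming the networkx library; graph data hypothetical) illustrates it:

```python
# Average of max flow over all ordered (s, t) node pairs of a
# capacitated directed graph with symmetric arcs.
import itertools
import networkx as nx

def all_pairs_average_max_flow(G):
    flows = [nx.maximum_flow_value(G, s, t)
             for s, t in itertools.permutations(G.nodes, 2)]
    return sum(flows) / len(flows)

G = nx.DiGraph()
for u, v, cap in [(1, 2, 5), (2, 1, 5), (2, 3, 3),
                  (3, 2, 3), (1, 3, 2), (3, 1, 2)]:
    G.add_edge(u, v, capacity=cap)   # symmetric, capacitated arcs
print(all_pairs_average_max_flow(G))
```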
This work considers geographically based physical networks with capacitated and symmetric arcs. Examples include transportation networks in which traffic per hour on a roadway, or bridges with weight restrictions, constrain traffic flow. We consider a class of disruptive events that impair the capacity of one or more edges in the network. To prioritize preemptive efforts to reduce network-wide vulnerability, we develop a variety of edge-specific, flow-based metrics to identify the most important edges. Edges deemed the most important can be reinforced or otherwise protected prior to any event to reduce network vulnerability, or can be candidates for expedited recovery (though we focus on the vulnerability, and not recoverability, aspect of network resilience in this work). In this section we provide details concerning various candidate edge importance measures relating to network vulnerability.

1.1. Notation

Let G = (V, E) denote a directed graph where V is a set of n vertices (also called nodes) and E ⊆ V × V is a set of m directed edges (also called arcs or links). For (i, j) ∈ E, the initial vertex i is called the tail and the terminal vertex j is called the head. Let c_ij and x_ij denote the capacity and flow on edge (i, j) ∈ E, respectively. A directed path P from a source node s to a target node t is a finite, alternating sequence of vertices and edges starting at node s and ending at node t, P = {s, (s, v_1), v_1, (v_1, v_2), v_2, ..., (v_k, t), t}, where all of the odd elements are distinct nodes in V and the even elements are directed edges in E. All nodes other than s and t are referred to as internal nodes. The length of path P is the number of edges it contains. The maximum capacity of a path is equal to the minimum capacity of all edges in the path; that is, the max capacity of path P equals min_{(i,j)∈P} c_ij. The s-t max flow problem utilizes a subset of all possible paths between s and t to route a maximum amount of a commodity from s to t without exceeding the capacity of any edge.

1.2. Proposed Importance Measures

Several importance measures for components of graphs have previously been offered. A frequent theme in these measures is the notion of centrality (Anthonisse 1971, Freeman 1977). Edge betweenness, for example, of (i, j) ∈ E is a function of the number of shortest paths between nodes s and t which include edge (i, j). The edge betweenness centrality of (i, j) is the sum of its edge betweenness for all s-t pairs. Newman (2004) introduced a modified edge centrality that does not restrict the metric to only shortest paths between s and t but stochastically includes other paths. In our work we introduce or otherwise consider several flow-based and topological measures relating to max flow paths within a graph.

1.2.1. All Pairs Max Flow Edge Count

The first importance measure is inspired by the basic edge betweenness centrality concept; however, instead of shortest paths, we consider max flow paths. The all pairs max flow edge count is the total number of times a given edge is utilized across all s-t pairs' max flow problems. The intuition is that if an edge is used more often than others in contributing to maximum flow, then a disruption that impacts its capacity is likely to have a significant impact on network performance φ. Let μ_st(i, j) = 1 if edge (i, j) is used in a given s-t max flow problem and 0 otherwise. We define the first candidate for edge importance based on the raw max flow edge tally divided by the total number of s-t pairs, as shown in Eq. (2). If multiple paths share a minimally capacitated edge, there will be multiple paths that contribute the same value to a given s-t max flow problem; we arbitrarily choose among the shortest of these otherwise equally capacitated paths.
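Eq. (2) itself did not survive in this excerpt; based on the definition just given (the raw edge tally divided by the number of ordered s-t pairs), a plausible reconstruction is:

```latex
% Inferred reconstruction of Eq. (2); the original equation is not
% present in this excerpt, so the normalization shown is an assumption.
I_1(i,j) = \frac{\sum_{s \in V}\,\sum_{t \in V \setminus \{s\}} \mu_{st}(i,j)}{n(n-1)}
\qquad (2)
```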
Journal of Structural Engineering (ASCE) | 2018
Weili Zhang; Peihui Lin; Naiyu Wang; Charles D. Nicholson; Xianwu Xue