Publication


Featured research published by Toni Farley.


Computers & Operations Research | 2007

Job scheduling methods for reducing waiting time variance

Nong Ye; Xueping Li; Toni Farley; Xiaoyun Xu

Minimizing Waiting Time Variance (WTV) is a job scheduling problem in which a batch of n jobs is scheduled for service on a single resource so that the variance of their waiting times is minimized. Minimizing WTV is a well-known scheduling problem, important for providing Quality of Service (QoS) in many industries. Minimizing the variance of job waiting times on computer networks can lead to stable and predictable network performance. Since the WTV minimization problem is NP-hard, we develop two heuristic job scheduling methods, called Balanced Spiral and Verified Spiral, which incorporate certain proven properties of optimal job sequences for this problem. We test and compare our methods with four other job scheduling methods on both small and large problem instances. Performance results show that Verified Spiral gives the best performance for the scheduling methods and problems tested in this study. Balanced Spiral produces comparable results, but at less computational cost. During our investigation we discovered a consistent pattern in the plot of WTV over the mean of all possible sequences for a set of jobs, which can be used to evaluate the sacrifice in mean waiting time incurred while pursuing WTV minimization.
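The WTV objective itself is compact enough to state in code. Below is a minimal sketch (not from the paper; the function and variable names are ours) that scores one candidate sequence, where a job's waiting time is the total processing time of the jobs served before it. The Balanced and Verified Spiral heuristics then search over such sequences; their placement rules are given in the paper and not reproduced here.

```python
from statistics import pvariance

def waiting_time_variance(sequence):
    """Variance of waiting times for jobs served in the given order.

    `sequence` is a list of processing times; job i waits for the
    sum of processing times of all jobs scheduled before it.
    """
    waits, elapsed = [], 0
    for p in sequence:
        waits.append(elapsed)  # waiting time before service starts
        elapsed += p
    return pvariance(waits)

# Example: two orderings of the same job batch give different WTV
print(waiting_time_variance([2, 5, 3, 8]))  # one candidate sequence
print(waiting_time_variance([8, 2, 3, 5]))  # another ordering of the batch
```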


Computers & Operations Research | 2005

Web server QoS models: applying scheduling rules from production planning

Nong Ye; Esma Senturk Gel; Xueping Li; Toni Farley; Ying Cheng Lai

Most web servers in practical use adopt a queuing policy based on the Best Effort model, which employs the first-in-first-out (FIFO) scheduling rule to prioritize web requests in a single queue. This model does not provide Quality of Service (QoS). In the Differentiated Services (DiffServ) model, separate queues are introduced to differentiate QoS for web requests with different priorities. This paper presents web server QoS models that use a single queue, along with scheduling rules from production planning in the manufacturing domain, to differentiate QoS for classes of web service requests with different priorities. These scheduling rules are Weighted Shortest Processing Time (WSPT), Apparent Tardiness Cost (ATC), and Earliest Due Date (EDD). We conduct simulation experiments and compare the QoS performance of these scheduling rules with the FIFO scheme used in the basic Best Effort model with only one queue, and with the basic DiffServ model with two separate queues. Simulation results demonstrate better QoS performance using WSPT and ATC, especially when requested services exceed the capacity of a web server.
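To illustrate how such rules prioritize a single queue, here is a sketch of the WSPT and ATC indices in their standard textbook forms (the paper's exact parameterization may differ; all names and numbers here are ours). WSPT favors high weight and short processing time; ATC scales the WSPT ratio down when a request's due date is still comfortably far away.

```python
import math

def wspt_priority(weight, proc_time):
    """WSPT: serve the request with the largest weight/processing-time ratio."""
    return weight / proc_time

def atc_priority(weight, proc_time, due, now, k, avg_proc):
    """Apparent Tardiness Cost index, in its standard textbook form:
    (w/p) * exp(-max(d - p - t, 0) / (k * p_bar))."""
    slack = max(due - proc_time - now, 0.0)
    return (weight / proc_time) * math.exp(-slack / (k * avg_proc))

# Pick the next request to serve from a single queue (hypothetical data)
queue = [  # (weight, processing time, due date)
    (3.0, 0.2, 1.0),
    (1.0, 0.1, 0.5),
]
now, k, avg_proc = 0.0, 2.0, 0.15
best = max(queue, key=lambda r: atc_priority(r[0], r[1], r[2], now, k, avg_proc))
print(best)
```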


Information Systems Frontiers | 2006

An attack-norm separation approach for detecting cyber attacks

Nong Ye; Toni Farley; Deepak Lakshminarasimhan

The two existing approaches to detecting cyber attacks on computers and networks, signature recognition and anomaly detection, have shortcomings related to the accuracy and efficiency of detection. This paper describes a new approach to cyber attack (intrusion) detection that aims to overcome these shortcomings through several innovations. We call our approach attack-norm separation. The attack-norm separation approach engages in the scientific discovery of data, features, and characteristics for cyber signal (attack data) and noise (normal data). We use attack profiling and analytical discovery techniques to generalize the data, features, and characteristics that exist in cyber attack and norm data. We also leverage well-established signal detection models from the physical world (e.g., radar signal detection) and verify them in cyberspace. With this foundation of information, we build attack-norm separation models that incorporate both attack and norm characteristics. This enables us to use the least amount of relevant data necessary to achieve detection accuracy and efficiency. The attack-norm separation approach considers not only activity data, but also state and performance data along the cause-effect chains of cyber attacks on computers and networks. This enables us to achieve a detection adequacy lacking in existing intrusion detection systems.
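The signal-plus-noise framing can be made concrete with a toy detector. The sketch below is a generic mean-shift test, not the models developed in the paper: normal (noise) behavior is summarized by a mean and standard deviation fitted offline on attack-free data, and a window of recent observations is flagged when its standardized sample mean exceeds a threshold. All numbers are hypothetical.

```python
from statistics import mean

def detect_mean_shift(window, norm_mean, norm_std, threshold=3.0):
    """Toy signal-detection test: flag a window of observations whose
    standardized sample mean departs from the 'norm' (noise) model."""
    n = len(window)
    z = (mean(window) - norm_mean) / (norm_std / n ** 0.5)
    return abs(z) > threshold

# Norm model fitted offline from attack-free data (values hypothetical)
norm_mean, norm_std = 120.0, 15.0          # e.g., packets/sec on a host
recent = [121, 118, 160, 158, 162, 159]    # recent activity samples
print(detect_mean_shift(recent, norm_mean, norm_std))  # True: mean has shifted
```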


Journal of the American Medical Informatics Association | 2013

The BioIntelligence Framework: a new computational platform for biomedical knowledge computing

Toni Farley; Jeff Kiefer; Preston Victor Lee; Daniel D. Von Hoff; Jeffrey M. Trent; Charles J. Colbourn; Spyro Mousses

Breakthroughs in molecular profiling technologies are enabling a new data-intensive approach to biomedical research, with the potential to revolutionize how we study, manage, and treat complex diseases. The next great challenge for clinical applications of these innovations will be to create scalable computational solutions for intelligently linking complex biomedical patient data to clinically actionable knowledge. Traditional database management systems (DBMS) are not well suited to representing complex syntactic and semantic relationships in unstructured biomedical information, introducing barriers to realizing such solutions. We propose a scalable computational framework for addressing this need, which leverages a hypergraph-based data model and query language that may be better suited for representing complex multi-lateral, multi-scalar, and multi-dimensional relationships. We also discuss how this framework can be used to create rapid learning knowledge base systems to intelligently capture and relate complex patient data to biomedical knowledge in order to automate the recovery of clinically actionable information.
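To illustrate why a hypergraph model fits this kind of data, here is a minimal sketch (ours, not the framework's actual data model or query language): a single labeled hyperedge can bind a patient, a treatment, and an outcome in one relationship, where a conventional binary-edge graph would need several edges plus a reified node.

```python
from collections import defaultdict

class Hypergraph:
    """Minimal hypergraph: each hyperedge links any number of nodes and
    carries a label, so one edge can express a multi-lateral relationship
    that a binary graph would have to break apart."""
    def __init__(self):
        self.edges = {}                     # edge id -> (label, node set)
        self.incident = defaultdict(set)    # node -> ids of touching edges

    def add_edge(self, eid, label, nodes):
        self.edges[eid] = (label, frozenset(nodes))
        for n in nodes:
            self.incident[n].add(eid)

    def relations_of(self, node, label=None):
        """All hyperedges touching `node`, optionally filtered by label."""
        return [self.edges[e] for e in self.incident[node]
                if label is None or self.edges[e][0] == label]

hg = Hypergraph()  # entity names below are hypothetical
hg.add_edge("e1", "treated_with", {"patient42", "gemcitabine", "partial_response"})
print(hg.relations_of("patient42", "treated_with"))
```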


IEEE Computer | 2005

A scientific approach to cyberattack detection

Nong Ye; Toni Farley

Attack-norm separation uses rigorous signal detection models to isolate attack signals from normal data before attack identification. By drawing from science instead of heuristics, this approach promises more efficient, accurate, and inclusive attack identification.


Computers & Operations Research | 2005

Enhancing router QoS through job scheduling with weighted shortest processing time-adjusted

Nong Ye; Zhibin Yang; Ying Cheng Lai; Toni Farley

Most routers on the Internet employ a first-in-first-out (FIFO) scheduling rule to determine the order in which data packets are served. This scheduling rule does not provide quality of service (QoS) with regard to differentiating services for data packets with different service priorities or enhancing routing performance. We develop a scheduling rule called Weighted Shortest Processing Time-Adjusted (WSPT-A), derived from WSPT (a scheduling rule for production planning in the manufacturing domain), to enhance router QoS. We implement a QoS router model based on WSPT-A and run simulations to measure and compare the routing performance of our model with that of router models based on the FIFO and WSPT scheduling rules. The simulation results show superior QoS performance when using the router model with WSPT-A.
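The shared core of WSPT and WSPT-A is a priority queue keyed on the weight-to-service-time ratio. The sketch below shows only that shared ordering (WSPT-A's adjustment is defined in the paper and not reproduced here); class and parameter names are ours.

```python
import heapq

class WsptQueue:
    """Single-queue packet scheduler ordered by WSPT (weight / service time)."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, packet, weight, service_time):
        # heapq is a min-heap, so store the negated priority
        priority = weight / service_time
        heapq.heappush(self._heap, (-priority, self._seq, packet))
        self._seq += 1  # tie-breaker keeps FIFO order among equal priorities

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = WsptQueue()
q.enqueue("bulk transfer", weight=1.0, service_time=4.0)
q.enqueue("voice frame", weight=5.0, service_time=0.5)
print(q.dequeue())  # "voice frame": higher weight, shorter service time
```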


Discrete Mathematics, Algorithms and Applications | 2009

Multi-terminal network connectedness on series-parallel networks

Toni Farley; Charles J. Colbourn

Network operation may require that a specified number k of nodes be able to communicate via paths consisting of operating edges and nodes. In an environment of node and edge failure, this leads to associated reliability measures. When the k nodes are known in advance, this has been widely studied as k-terminal reliability; when the k nodes are chosen uniformly at random, this has been studied as k-resilience. A third notion, when it suffices to have any k nodes communicate, is related to the expected size of the largest component in the network. We generalize these three measures to the probability that, given h nodes chosen in advance and i nodes chosen at random, they appear in a component of size at least k = h + i + j. As expected, for general networks, for most choices of (h, i, j) the computation is #P-complete and hence unlikely to admit a polynomial-time algorithm. We develop polynomial-time algorithms in the special case that the network is series-parallel, which subsume and generalize earlier methods for k-terminal reliability and k-resilience.
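The reason series-parallel structure helps is that reliability composes along the same reductions that define the network class. The classic two-terminal special case, sketched below, is far simpler than the (h, i, j) measures treated in the paper, but it conveys why polynomial-time algorithms are possible: each series or parallel reduction collapses two edges into one in constant time.

```python
def series(p1, p2):
    """Both series edges must operate: reliabilities multiply."""
    return p1 * p2

def parallel(p1, p2):
    """A parallel pair fails only if both branches fail."""
    return 1 - (1 - p1) * (1 - p2)

# Terminal-to-terminal reliability of a small series-parallel network:
# two parallel branches, each a series of two edges operating w.p. 0.9
branch = series(0.9, 0.9)          # 0.81
print(parallel(branch, branch))    # ~0.9639
```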


Design of Reliable Communication Networks | 2009

Multiterminal measures for network reliability and resilience

Toni Farley; Charles J. Colbourn

Network reliability, specifically k-terminal reliability, gives the probability that k specified nodes in a network are connected. Multi-terminal network resilience measures the average k-terminal reliability over all node sets of size k; this is the probability that a randomly chosen set of k nodes is connected. One may also ask for the probability that any k nodes are connected. This leads to three ways to require that a set of k nodes be connected: the nodes are provided as input to the problem (as in reliability), they are randomly chosen (as in resilience), or they can be any k nodes. Certain problems may require a set constructed by some combination of the three. We introduce new measures to cover these possibilities, and reduce all of the measures to two general expressions that capture them. These expressions allow decades of work on reliability computation to be brought to bear on solving them. Additionally, we introduce six component-based network measures and demonstrate how they can be solved alongside reliability and resilience. The component-based measures admit even more variability in problem definition. In the end, we have thirteen distinct measures, and we solve them simultaneously. An algorithm and example results are provided.
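For intuition about the baseline quantity these measures generalize, here is a Monte Carlo sketch of k-terminal reliability (ours; the paper derives exact expressions rather than simulating): sample each edge's state independently, then check with union-find whether all terminals land in one component.

```python
import random

def k_terminal_reliability(nodes, edges, terminals, trials=100_000):
    """Monte Carlo estimate of the probability that all `terminals`
    lie in one connected component when each edge (u, v, p) operates
    independently with probability p."""
    hits = 0
    for _ in range(trials):
        parent = {v: v for v in nodes}          # union-find over nodes

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v

        for u, v, p in edges:
            if random.random() < p:             # edge operates this trial
                parent[find(u)] = find(v)
        roots = {find(t) for t in terminals}
        hits += (len(roots) == 1)
    return hits / trials

# Triangle with edge reliability 0.9; the exact 2-terminal answer is
# 0.9 + 0.1 * 0.81 = 0.981
print(k_terminal_reliability("abc",
                             [("a", "b", 0.9), ("b", "c", 0.9), ("a", "c", 0.9)],
                             terminals="ac"))
```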


Ergonomics | 2003

A data mining technique for discovering distinct patterns of hand signs: implications in user training and computer interface design

Nong Ye; Xiangyang Li; Toni Farley

Hand signs are an important way to enter information into computers for certain tasks. Computers receive sensor data of hand signs for recognition. When using hand signs as computer inputs, we need to (1) train computer users in the sign language so that their hand signs can be easily recognized by computers, and (2) design the computer interface to avoid the use of confusing signs, improving user input performance and user satisfaction. For user training and computer interface design, it is important to know which signs can be easily recognized by computers and which signs computers cannot distinguish. This paper presents a data mining technique to discover distinct patterns of hand signs from sensor data. Based on these patterns, we derive groups of signs that are indistinguishable by computers. Such information can in turn assist in user training and computer interface design.
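One generic way to approximate the "indistinguishable sign" groups the paper derives is to train any classifier on the sensor data and read confused pairs off its error rates. The sketch below uses a nearest-centroid classifier purely as a stand-in (the paper's mining technique is different); the data layout and threshold are hypothetical.

```python
from collections import defaultdict
import math

def centroid(vectors):
    """Component-wise mean of equal-length feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def nearest(sample, centroids):
    """Sign whose centroid is closest to the sample (Euclidean distance)."""
    return min(centroids, key=lambda s: math.dist(sample, centroids[s]))

def confused_pairs(train, test, min_rate=0.2):
    """Pairs of signs that a nearest-centroid classifier mixes up more than
    `min_rate` of the time: candidates for 'indistinguishable' groups."""
    cents = {sign: centroid(vecs) for sign, vecs in train.items()}
    counts, errors = defaultdict(int), defaultdict(int)
    for sign, vecs in test.items():
        for v in vecs:
            counts[sign] += 1
            guess = nearest(v, cents)
            if guess != sign:
                errors[(sign, guess)] += 1
    return {pair: n / counts[pair[0]]
            for pair, n in errors.items() if n / counts[pair[0]] >= min_rate}

# Tiny hypothetical sensor dataset: two signs, two samples each
train = {"A": [[0.1, 0.2], [0.2, 0.1]], "B": [[0.9, 0.8], [0.8, 0.9]]}
print(confused_pairs(train, train))  # {}: these two signs separate cleanly
```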


Journal of the ACM | 2004

A Survey of BGP Security Issues and Solutions

Kevin R. B. Butler; Toni Farley; Patrick D. McDaniel; Jennifer Rexford

Collaboration


Dive into Toni Farley's collaborations.

Top Co-Authors

Nong Ye (Arizona State University)
Spyro Mousses (Translational Genomics Research Institute)
Xueping Li (University of Tennessee)
Yan Chen (Arizona State University)
Ying Cheng Lai (Arizona State University)
Patrick D. McDaniel (Pennsylvania State University)