Ronald D. Armstrong
Rutgers University
Publications
Featured research published by Ronald D. Armstrong.
Annals of Operations Research | 2008
Ronald D. Armstrong; Su Gao; Lei Lei
In this paper, we study the zero-inventory production and distribution problem with a single transporter and a fixed sequence of customers. The production facility has a limited production rate, and the delivery truck has non-negligible traveling times between locations. The order in which customers may receive deliveries is fixed. Each customer requests a delivery quantity and a time window for receiving the delivery. The lifespan of the product starts as soon as the production for a customer’s order is finished, which makes the product expire in a constant time. Since the production facility and the shipping truck are limited resources, not all the customers may receive the delivery within their specified time windows and/or within the product lifespan. The problem is then to choose a subset of customers from the given sequence to receive the deliveries so as to maximize the total demand satisfied, without violating the product lifespan, the production/distribution capacity, and the delivery time window constraints. We analyze several fundamental properties of the problem and show that these properties lead to a fast branch-and-bound search procedure for practical problems. A heuristic lower bound on the optimal solution is developed to accelerate the search. Empirical studies comparing the computational effort required by the proposed search procedure with that required by CPLEX on randomly generated test cases are reported.
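A minimal branch-and-bound sketch of the customer-selection idea summarized above, written in Python. The model is deliberately simplified: each delivery is treated as a separate round trip from the plant, and production and travel are strictly sequential. The names demands, windows, trip, rate, and lifespan are illustrative assumptions, not the paper's notation.

def best_served_demand(demands, windows, trip, rate, lifespan):
    """Maximize total demand served from a fixed customer sequence.
    demands[i]  -- quantity requested by customer i
    windows[i]  -- (earliest, latest) delivery time for customer i
    trip[i]     -- one-way travel time from the plant to customer i
    rate        -- production rate (quantity per unit time)
    lifespan    -- time allowed between production completion and delivery"""
    n = len(demands)
    best = [0.0]

    def branch(i, clock, served):
        # Prune: even serving every remaining customer cannot beat the incumbent.
        if served + sum(demands[i:]) <= best[0]:
            return
        if i == n:
            best[0] = served
            return
        # Serve customer i: produce the order, drive out, deliver, drive back.
        finish = clock + demands[i] / rate
        arrive = max(finish + trip[i], windows[i][0])
        if arrive <= windows[i][1] and arrive - finish <= lifespan:
            branch(i + 1, arrive + trip[i], served + demands[i])
        # Or skip customer i.
        branch(i + 1, clock, served)

    branch(0, 0.0, 0.0)
    return best[0]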
Psychometrika | 1992
Ronald D. Armstrong; Douglas H. Jones; Ing-Long Wu
Binary programming models are presented to generate parallel tests from an itembank. The parallel tests are created to match item for item an existing seed test and match user supplied taxonomic specifications. The taxonomic specifications may be either obtained from the seed test or from some other user requirement. An algorithm is presented along with computational results to indicate the overall efficiency of the process. Empirical findings based on an itembank for the Arithmetic Reasoning section of the Armed Services Vocational Aptitude Battery are given.
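As a rough illustration of the item-for-item matching idea (not the paper's actual binary program, which also enforces taxonomic specifications), the matching can be viewed as an assignment problem on a single item attribute such as difficulty. The sketch below assumes hypothetical difficulty arrays and uses SciPy's assignment solver.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_seed_test(seed_difficulty, bank_difficulty):
    """Choose one bank item per seed-test item, minimizing the total absolute
    difference in difficulty.  Returns {seed index: bank index}."""
    seed = np.asarray(seed_difficulty, dtype=float)   # shape (n_seed,)
    bank = np.asarray(bank_difficulty, dtype=float)   # shape (n_bank,), n_bank >= n_seed
    cost = np.abs(seed[:, None] - bank[None, :])      # pairwise difficulty gaps
    rows, cols = linear_sum_assignment(cost)          # optimal one-to-one assignment
    return dict(zip(rows.tolist(), cols.tolist()))

Adding taxonomic counts as side constraints is what pushes the real model from pure assignment into general binary programming.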
Applied Psychological Measurement | 1998
Ronald D. Armstrong; Douglas H. Jones; Charles S. Kunce
The use of mathematical programming techniques to generate parallel test forms with passages and item characteristics based on item response theory was investigated, using the Fundamentals of Engineering Examination. The problem of creating one test form is modeled as a network-flow problem with additional constraints. This formulation is then used in a heuristic assembly of several parallel forms. The network-flow problem is solved with a special-purpose combinatorial polynomial algorithm. The non-network constraints are handled using Lagrangian relaxation and heuristic search techniques. From an item bank with almost 1,100 items, four parallel test forms with 157 items each were generated in 3 minutes. The results of the mathematical programming approach were compared with human-generated forms. It was concluded that the mathematical programming approach can produce test forms of the same quality as those produced entirely by human effort.
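The Lagrangian idea can be sketched compactly: dualize a non-network constraint into the item scores, solve the easy relaxed subproblem, and update the multiplier by subgradient steps. The word-count budget below is a made-up side constraint used only for illustration, and the relaxed subproblem here is a simple top-k selection rather than the paper's genuine network-flow problem.

import heapq

def lagrangian_select(info, words, k, word_budget, iters=200, step=0.01):
    """Pick k items maximizing information while (approximately) respecting a
    word-count budget, by dualizing the budget constraint with subgradient ascent.
    Returns (picked_indices, information) for the best feasible pick found,
    or None if no feasible pick was encountered."""
    n = len(info)
    lam, best = 0.0, None
    for _ in range(iters):
        # Relaxed subproblem: top-k items by Lagrangian-adjusted score.
        scores = [info[i] - lam * words[i] for i in range(n)]
        pick = heapq.nlargest(k, range(n), key=lambda i: scores[i])
        used = sum(words[i] for i in pick)
        if used <= word_budget:                              # feasible: keep the best
            value = sum(info[i] for i in pick)
            if best is None or value > best[1]:
                best = (pick, value)
        lam = max(0.0, lam + step * (used - word_budget))    # subgradient update
    return best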
Applied Psychological Measurement | 2009
Ronald D. Armstrong; Min Shi
This article develops a new cumulative sum (CUSUM) statistic to detect aberrant item response behavior. Shifts in behavior are modeled with quadratic functions and a series of likelihood ratio tests are used to detect aberrancy. The new CUSUM statistic is compared against another CUSUM approach as well as traditional person-fit statistics. A simulation study demonstrates the advantage of the proposed method. Also, the person-fit methods are applied to real response data from the administration of a high-stakes exam. The use of CUSUM charts to help visually identify types of aberrant behavior is demonstrated.
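A simpler CUSUM chart than the quadratic-shift, likelihood-ratio statistic developed in the article can be sketched on standardized Rasch residuals; theta, the item difficulties, and the threshold below are assumed inputs for illustration.

import math

def rasch_prob(theta, b):
    """Probability of a correct response under a Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cusum_flags(responses, difficulties, theta, threshold=3.0):
    """Upper/lower CUSUM on standardized item residuals, flagging items at which
    the accumulated drift suggests aberrant response behavior."""
    c_plus = c_minus = 0.0
    flags = []
    for u, b in zip(responses, difficulties):
        p = rasch_prob(theta, b)
        z = (u - p) / math.sqrt(p * (1 - p))    # standardized residual
        c_plus = max(0.0, c_plus + z)           # drift toward unexpected successes
        c_minus = min(0.0, c_minus + z)         # drift toward unexpected failures
        flags.append(c_plus > threshold or c_minus < -threshold)
    return flags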
Journal of Educational and Behavioral Statistics | 1994
Ronald D. Armstrong; Douglas H. Jones; Zhaobo Wang
A network-flow model is formulated for constructing parallel tests based on classical test theory using test reliability for the criterion. The model enables practitioners to specify a test difficulty distribution for the values of the item difficulties as well as test composition requirements. Use of the network-flow algorithm ensures high computational efficiency, allowing wide applications of optimal test construction using microcomputers. The results of an empirical study show that the generated tests have acceptably high test reliability.
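To make the blueprint idea concrete, here is a greedy stand-in (not the article's network-flow formulation) that fills a user-specified difficulty distribution with the items contributing most to reliability; the item-total correlation used as the ranking criterion is an assumed classical item statistic.

from collections import defaultdict

def fill_difficulty_blueprint(items, blueprint):
    """items: list of (item_id, difficulty_bin, item_total_correlation).
    blueprint: dict mapping difficulty_bin -> number of items required.
    Returns a form that meets the blueprint, taking the highest-correlation
    items within each bin."""
    by_bin = defaultdict(list)
    for item_id, d_bin, r_it in items:
        by_bin[d_bin].append((r_it, item_id))
    form = []
    for d_bin, need in blueprint.items():
        ranked = sorted(by_bin[d_bin], reverse=True)   # highest correlation first
        if len(ranked) < need:
            raise ValueError(f"bin {d_bin}: only {len(ranked)} items for {need} slots")
        form.extend(item_id for _, item_id in ranked[:need])
    return form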
Applied Psychological Measurement | 2005
Dmitry I. Belov; Ronald D. Armstrong
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal chance of being built during an assembly. This allows the authors to address the following issues of pool analysis and extension: compare the strengths and weaknesses of different pools, identify the most restrictive constraint(s) for test assembly, and identify properties of the items that should be added to a pool to achieve greater usability of the pool. Computer experiments with operational pools are given.
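A naive rejection sampler conveys the flavor of the analysis: draw candidate tests uniformly from the pool, check each constraint, and tally which constraint rejects most candidates. The article's algorithm samples uniformly from the feasible tests themselves; this sketch only approximates that and becomes impractical when feasible tests are rare. The pool, constraints, and test_length arguments are assumed inputs.

import random
from collections import Counter

def monte_carlo_assembly(pool, test_length, constraints, trials=10000, rng=None):
    """Uniformly sample candidate tests and record which constraint rejects them.
    constraints: list of (name, predicate) where predicate(test_items) -> bool.
    Returns (feasible_tests, rejection_counts)."""
    rng = rng or random.Random(0)
    feasible, rejections = [], Counter()
    for _ in range(trials):
        test = rng.sample(pool, test_length)    # uniform draw without replacement
        failed = next((name for name, ok in constraints if not ok(test)), None)
        if failed is None:
            feasible.append(test)
        else:
            rejections[failed] += 1
    return feasible, rejections

The rejection counts give a crude view of which constraint is most restrictive for a given pool, one of the pool-analysis questions the article addresses.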
British Journal of Mathematical and Statistical Psychology | 2011
Dmitry I. Belov; Ronald D. Armstrong
The Kullback-Leibler divergence (KLD) is a widely used method for measuring the fit of two distributions. In general, the distribution of the KLD is unknown. Under reasonable assumptions, common in psychometrics, the distribution of the KLD is shown to be asymptotically distributed as a scaled (non-central) chi-square with one degree of freedom or a scaled (doubly non-central) F. Applications of the KLD for detecting heterogeneous response data are discussed with particular emphasis on test security.
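For a single examinee under a Rasch model, the KLD between the response distributions at two ability values is a sum of per-item Bernoulli divergences. The sketch below computes that statistic, the kind of quantity whose asymptotic scaled chi-square or F behavior the paper characterizes; the Rasch model and the difficulty values are assumptions of the example.

import math

def item_kld(p1, p0):
    """KL divergence between two Bernoulli response distributions for one item."""
    return p1 * math.log(p1 / p0) + (1 - p1) * math.log((1 - p1) / (1 - p0))

def test_kld(theta1, theta0, difficulties):
    """Sum of per-item KL divergences between the response distributions implied
    by abilities theta1 and theta0 under a Rasch model."""
    p = lambda t, b: 1.0 / (1.0 + math.exp(-(t - b)))
    return sum(item_kld(p(theta1, b), p(theta0, b)) for b in difficulties)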
Applied Psychological Measurement | 2010
Dmitry I. Belov; Ronald D. Armstrong
This article presents a new method to detect copying on a standardized multiple-choice exam. The method combines two statistical approaches in successive stages. The first stage uses Kullback-Leibler divergence to identify examinees, called subjects, who have demonstrated inconsistent performance during an exam. For each subject the second stage uses the K-Index to search for a possible source of the responses. Both stages apply a hypothesis test given a significance level. Monte Carlo methods are applied to approximate empirical distributions and then compute critical values providing a low Type I error rate and a good copying-detection rate. The results with both simulated and empirical data demonstrate the effectiveness of this approach.
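The two-stage structure can be skeletonized as follows. The KLD statistic, the similarity statistic (standing in for the K-Index, which is not implemented here), and the null simulator are assumed callables; both cutoffs come from Monte Carlo null distributions, as in the paper.

def monte_carlo_cutoff(statistic, simulate_null, alpha=0.01, reps=2000):
    """Critical value at significance level alpha, estimated from simulated null data."""
    null = sorted(statistic(simulate_null()) for _ in range(reps))
    return null[int((1 - alpha) * reps) - 1]

def two_stage_screen(examinees, kld_stat, kld_cut, similarity_stat, sim_cut):
    """Stage 1 flags subjects with inconsistent performance; stage 2 searches each
    flagged subject's potential sources among the other examinees."""
    flagged = [e for e in examinees if kld_stat(e) > kld_cut]
    pairs = []
    for subject in flagged:
        for source in examinees:
            if source is not subject and similarity_stat(subject, source) > sim_cut:
                pairs.append((subject, source))
    return pairs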
Applied Psychological Measurement | 1992
Ronald D. Armstrong; Douglas H. Jones
To estimate test reliability and to create parallel tests, test items frequently are matched. Items can be matched by splitting tests into parallel test halves, by creating T splits, or by matching a desired test form. Problems often occur. Algorithms are presented to solve these problems. The algorithms are based on optimization theory in networks (graphs) and have polynomial complexity. Computational results from solving sample problems with several hundred decision variables are reported.
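A greedy pairing illustrates the test-splitting case (the article's algorithms solve this as polynomial network matching problems instead): sort items by difficulty, pair neighbors, and send one item of each pair to each half. Items here carry only an id and a difficulty, an assumed minimal representation.

def split_into_halves(items):
    """items: list of (item_id, difficulty).  Sorts by difficulty, pairs neighbours,
    and puts one item of each pair in each half (an odd leftover item is dropped)."""
    ranked = sorted(items, key=lambda item: item[1])
    half_a, half_b = [], []
    for (id_a, _), (id_b, _) in zip(ranked[0::2], ranked[1::2]):
        half_a.append(id_a)
        half_b.append(id_b)
    return half_a, half_b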
Applied Psychological Measurement | 2004
Ronald D. Armstrong; Douglas H. Jones; Nicole B. Koppel; Peter J. Pashley
A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee’s progression through the network of testlets is dictated by the correctness of an examinee’s answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible test forms, allowing test specialists the opportunity to review them before they are administered. Also, limiting the exposure of an individual MFS to a specific period of time can enhance test security. This article provides an overview of methods that have been developed to generate parallel MFSs. The approach is applied to the assembly of an experimental computerized Law School Admission Test (LSAT).
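A small sketch of how an MFS might be represented and traversed; the dict layout and the pass-mark routing rule are assumptions for illustration, not the LSAT assembly model described in the article.

def all_forms(mfs, start):
    """Enumerate every root-to-leaf path (possible test form) through the structure.
    mfs: dict mapping a testlet id to the list of testlets that can follow it
    (an empty list marks a final testlet)."""
    def walk(node, path):
        path = path + [node]
        if not mfs[node]:
            yield path
            return
        for child in mfs[node]:
            yield from walk(child, path)
    return list(walk(start, []))

def administer(mfs, start, testlet_score, passmark):
    """Route one examinee: a testlet score at or above the pass mark branches to the
    last (harder) successor, otherwise to the first (easier) one."""
    node, path = start, []
    while True:
        path.append(node)
        successors = mfs[node]
        if not successors:
            return path
        node = successors[-1] if testlet_score(node) >= passmark else successors[0]

Enumerating the paths up front is what lets test specialists review every possible form before administration, as the abstract notes.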