Linda C. Malone
University of Central Florida
Publication
Featured research published by Linda C. Malone.
Informing Science: The International Journal of an Emerging Transdiscipline | 2004
Deborah Sater Carstens; Pamela R. McCauley-Bell; Linda C. Malone; Ronald F. DeMara
Introduction: The expansion of computing and networking, together with the growth in threats, has enhanced the need to perpetually manage information security within an organization. Although there is literature addressing the human side of information security, events such as 9/11 and the war on terrorism have placed more of a burden on organizations, both government and private industry, enhancing the need for more research in information security. Carnegie Mellon's Computer Emergency Response Team (2004) has collected statistics showing that 6 security incidents were reported in 1988 compared to 137,529 in 2003. A survey by the Federal Bureau of Investigation (FBI) suggested that 40% of the organizations surveyed claimed that system penetrations from outside their organization had increased 25% over the prior year (Ives, Walsh, & Schneider, 2004). The U.S. Department of Homeland Security (2002) is concerned with the need for information security measures. The Federal Information Security Management Act of 2002 was therefore put into place to protect information and systems from unauthorized access, use, disclosure, disruption, modification, or destruction, in order to provide integrity, confidentiality, and availability of information. The government's information security responsibilities range from protecting intelligence information to issuing social security numbers for each citizen. Private industry must also be concerned with information security, as it is vital for the livelihood of any company to protect customers' personal information along with the management of each company's supply chain (Olivia, 2003). Earlier research identified the presence of human error risks to the security of information systems (Wood & Banks, 1993; Courtney, as cited in NIST, 1992). A survey conducted by one of the authors identified password issues as the second most likely human error risk factor to impact an information system. The significance of this finding is heightened by the fact that passwords are the primary source of user authentication for the majority of personal and private information systems. These findings on password issues as a human error risk factor have been further identified as a threat to security by the University of Findlay Center for Terrorism Preparedness (2003), which developed a vulnerability assessment methodology to help organizations better identify their weaknesses in terms of information security. Extensive password requirements can overload human memory capabilities as the number of passwords and their complexity level increases. The exponential growth in security incidents (Carnegie Mellon Computer Emergency Response Team, 2004) requires a comprehensive approach to the development of password guidelines that do not exceed human memory limitations yet maintain the strength of passwords required by the information technology (IT) community. The IT community consists of the network administrators and security officers who are directly responsible for information security in terms of integrity, confidentiality, and availability of information. In earlier investigations, over 50% of incidents occurring within government and private organizations were connected to human error (NIST, 1992). The impact of human error on information security is an important issue that, left unresolved, can have adverse effects on industry.
This research is focused on measuring the impact of password demands as a means of authentication and on mitigating the risks that result when these demands exceed human capabilities. Literature Review: Information security involves making information accessible to those who need it, while maintaining integrity and confidentiality. The three categories used to classify information security risks are confidentiality, integrity, and accessibility or availability of information (U. …
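The trade-off the paper describes between password strength and memorability can be made concrete. As a rough sketch (ours, not the paper's), the theoretical entropy of a random password grows with both alphabet size and length, so a longer password drawn from a small character set can beat a short "complex" one while being easier to remember:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Theoretical entropy of a random password: length * log2(alphabet size)."""
    return length * math.log2(charset_size)

# A longer lowercase-only password can beat a short "complex" one.
print(entropy_bits(94, 8))   # 8 printable-ASCII characters: ~52.4 bits
print(entropy_bits(26, 14))  # 14 lowercase letters:         ~65.8 bits
```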
Annals of Emergency Medicine | 1991
John F O'Brien; Jay L. Falk; Brian E Carey; Linda C. Malone
STUDY OBJECTIVES We studied the hypothesis that rectal thiopental is an effective agent for emergency department pediatric sedation and may have advantages over a more traditional regimen. DESIGN Rectal thiopental 25 mg/kg was compared with the combination of meperidine 2 mg/kg, promethazine 1 mg/kg, and chlorpromazine 1 mg/kg in a prospective, randomized, double-blinded study. TYPE OF PARTICIPANTS Children between 18 months and 6 years of age presenting to our teaching hospital ED for laceration repair were entered after the clinical decision was made to sedate. Patients with altered sensorium, medical contraindications to sedation, or medication allergy were excluded. INTERVENTIONS After informed consent, each patient received an IM injection (drug combination or placebo) and a rectal suspension (rectal thiopental or placebo) simultaneously. MEASUREMENTS AND MAIN RESULTS Vital signs, pulse oximetry, and pediatric Glasgow Coma Scores were recorded before and every 15 minutes after sedation until discharge. Intradermal lidocaine and suturing began when the patient appeared adequately sedated, and response was numerically scored. Patients were discharged when able to stand. Twenty-nine patients, 34 ± 13 months old, were studied. Fifteen patients received rectal thiopental, and 14 received the drug combination. Analysis using the Wilcoxon two-sample test revealed no differences in age, sex, weight, or wound location between groups. The time course of sedation was different for the two treatment regimens. At 15 and 30 minutes after administration, patients who received rectal thiopental were more deeply sedated than those who received the drug combination, as evidenced by significantly lower Glasgow Coma Scores (P < .05). Accordingly, time from medication administration to suturing was 29 ± 12 minutes in the thiopental group and 54 ± 33 minutes (P < .01) in the drug combination group. Patients in the thiopental group also recovered more quickly and were discharged approximately one-half hour earlier than those in the drug combination group (89 ± 25 vs 120 ± 44 minutes, P < .05). No difference in response to lidocaine injection or suturing was demonstrated between the groups. Laceration repair time was comparable between the groups. There were eight sedation failures (three of 15 in the thiopental group and five of 14 in the drug combination group, P = NS). Vital signs remained stable, no adverse reactions occurred, and no patient had decreased oxygen saturation to less than 95%. CONCLUSION Rectal thiopental is superior to this drug combination for pediatric sedation because it can be administered painlessly, has a more rapid onset and offset of action, and is of equal safety and efficacy at the dosage studied.
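For readers unfamiliar with the Wilcoxon two-sample (rank-sum) test used in this study, a minimal sketch follows; the numbers are hypothetical minutes-to-discharge values for two sedation groups, not the study's data:

```python
from scipy import stats

# Hypothetical minutes-to-discharge values for two sedation groups;
# illustrative only, not the study's data.
thiopental = [72, 85, 90, 95, 100, 88, 79, 93]
combination = [110, 125, 95, 140, 118, 132, 105, 150]

stat, p = stats.ranksums(thiopental, combination)
print(f"Wilcoxon rank-sum statistic = {stat:.2f}, p = {p:.4f}")
```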
winter simulation conference | 2001
L. Trocine; Linda C. Malone
Screening is the first phase of an experimental study on systems and simulation models. Its purpose is to eliminate negligible factors so that efforts may be concentrated on just the important ones. Successful screening of more than about 20 or 30 factors has been investigated only in the past 10 to 15 years, with most improvements in the past 5 years. A handful of alternative methods, including sequential bifurcation, iterated fractional factorial designs, and the Trocine Screening Procedure, are described, and evaluative and comparative results are presented.
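As a hedged illustration of one of the named methods, the sketch below implements a simplified sequential bifurcation: it assumes all effects are non-negative (the method's usual known-signs assumption) and splits any factor group whose aggregate effect exceeds a threshold. The toy response function and threshold are our assumptions, not the paper's:

```python
def sequential_bifurcation(simulate, n_factors, threshold):
    """Simplified sequential bifurcation: split any factor group whose
    aggregate effect exceeds `threshold`. Assumes all effects are
    non-negative (the usual known-signs assumption)."""
    baseline = simulate([0] * n_factors)

    def group_effect(lo, hi):
        x = [1 if lo <= i < hi else 0 for i in range(n_factors)]
        return simulate(x) - baseline

    important, stack = [], [(0, n_factors)]
    while stack:
        lo, hi = stack.pop()
        if group_effect(lo, hi) <= threshold:
            continue  # whole group judged negligible
        if hi - lo == 1:
            important.append(lo)
        else:
            mid = (lo + hi) // 2
            stack += [(lo, mid), (mid, hi)]
    return sorted(important)

# Toy deterministic response: only factors 3 and 12 matter out of 20.
toy = lambda x: 5.0 * x[3] + 3.0 * x[12] + 0.01 * sum(x)
print(sequential_bifurcation(toy, n_factors=20, threshold=1.0))  # [3, 12]
```

The real method economizes further by reusing earlier runs; this sketch re-simulates each group for clarity.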
Supply Chain Management | 2008
Mariah M. Jeffery; Renee J. Butler; Linda C. Malone
Purpose – The purpose of this paper is to provide an approach for determining inventory levels that result in a minimum cost customer service level for specific products based on their demand characteristics and profit margin. Design/methodology/approach – The paper uses logistic regression to quantify the relationship between customer service level and inventory on‐hand in relation to forecasted demand, as well as to estimate the impact of factors such as forecast accuracy, customer lead‐times, and demand variability on this relationship. It then performs financial analysis in order to associate a cost with customer service level. Findings – Empirical results based on data from a semiconductor manufacturer indicate significant cost‐savings can be achieved by applying the proposed method over the organization's current ad hoc practices. Research limitations/implications – The minimum cost customer service level identified via the methodology is based on values of dynamic factors that are specific to the time wh...
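A minimal sketch of the logistic-regression step, on synthetic data rather than the semiconductor manufacturer's: regress a binary "demand filled" outcome on the inventory-to-forecast ratio, then invert the fitted curve to find the inventory needed for a target service level. All numbers and the assumed true relationship are ours:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic data: probability of meeting demand rises with the ratio of
# on-hand inventory to forecasted demand. Illustrative only.
ratio = rng.uniform(0.5, 2.0, 500)            # inventory / forecast
p_fill = 1 / (1 + np.exp(-(4 * ratio - 4)))   # assumed true relationship
filled = rng.binomial(1, p_fill)

X = sm.add_constant(ratio)
fit = sm.Logit(filled, X).fit(disp=False)
print(fit.params)  # intercept and slope relating ratio to service level

# Invert the fitted curve: inventory ratio needed for a 95% service level.
b0, b1 = fit.params
print((np.log(0.95 / 0.05) - b0) / b1)
```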
winter simulation conference | 1999
Theodora Ivanova; Linda C. Malone; Mansooreh Mollaghasemi
The focus of the paper is on the comparison of results obtained with and without group screening in an experimental design methodology applied to a semiconductor manufacturing simulation model. A whole-line simulation model of a semiconductor fab is built. The model includes more than 200 tools used in manufacturing 2 products with around 250 steps each. Output analysis results for equipment utilization and queue sizes identified the three most critical equipment groups in the fab. Seventeen input factors are set for investigation through a 2-stage group-screening experiment and a fractional factorial using all 17 factors. The results illustrate that the final models can be quite different. While group screening used with simulation can be an appealing, flexible, tractable tool for capacity analysis of a semiconductor manufacturing facility, one must be concerned with the fact that the two techniques can give different answers to the user. Additionally, researchers need to address the proper choice of significance level for group screening.
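To make the group-screening idea concrete, here is a deliberately simplified two-stage sketch: stage 1 tests whole groups of factors, and stage 2 tests factors individually only inside groups that showed a large effect. It uses one-at-a-time runs rather than the fractional factorial designs the paper employs, and the toy response is our assumption:

```python
def two_stage_group_screen(simulate, n_factors, group_size, threshold):
    """Stage 1 estimates each group's effect by setting all of its
    factors high together; stage 2 tests factors individually inside
    groups whose effect was large. A one-at-a-time variant, simpler
    than the fractional factorial designs used in the paper."""
    baseline = simulate([0] * n_factors)

    def effect(indices):
        chosen = set(indices)
        x = [1 if i in chosen else 0 for i in range(n_factors)]
        return simulate(x) - baseline

    groups = [range(s, min(s + group_size, n_factors))
              for s in range(0, n_factors, group_size)]
    survivors = [g for g in groups if abs(effect(g)) > threshold]
    return sorted(i for g in survivors for i in g
                  if abs(effect([i])) > threshold)

# Toy response over 17 factors: only factors 2 and 9 matter.
toy = lambda x: 7.0 * x[2] - 4.0 * x[9] + 0.05 * sum(x)
print(two_stage_group_screen(toy, n_factors=17, group_size=4, threshold=1.0))
# -> [2, 9]
```

Note the hazard the paper raises: effects of opposite sign within one group can cancel in stage 1, so grouped and ungrouped designs can disagree.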
Human Factors | 2007
Roberto K. Champney; Kay M. Stanney; Phillip A. K. Hash; Linda C. Malone; Robert S. Kennedy; Daniel E. Compton
Objective: This study investigated potential means of facilitating a return to normal functioning following virtual environment (VE) exposure, using a peg-in-hole exercise to recalibrate hand-eye coordination, a targeted gait movement (rail walking) to recalibrate vestibular (i.e., postural) aftereffects, and natural decay. Background: Despite technology advances and considerable efforts focused on the identification and quantification of VE aftereffects, few have addressed means of recuperation, the focus of the current study. Method: After 15 to 60 min of VE exposure and recalibratory exercises, hand-eye coordination and postural stability were assessed electronically, the former via a 3-D measure capturing pointing errors and the latter by head and body oscillations while standing in the tandem Romberg position. Both measurements were collected immediately after VE exposure and every 15 min up to 1 hr thereafter. Results: Participants (more than 900 college students) who experienced the peg-in-hole readaptation strategy had a significant decrease (p < 0.000) in pointing errors following the exercise; the other two methods (i.e., rail walking, natural decay) showed no significant change. For posture, all groups showed significant improvement during the 15 minutes after VE exposure, yet none returned to baseline by 1 hr postexposure. Conclusion: Although hand-eye coordination readaptation strategies showed noticeable effects immediately after they were performed, aftereffects were not completely eliminated after 1 hr; hence, further research on readaptation strategies is essential to achieve more substantial recalibratory gains in hand-eye coordination and posture. Additionally, hand-eye coordination and vestibular aftereffects may require a period exceeding the VE immersion time to recover. Application: These findings may serve as a guide in the development of monitoring policies following VE exposure.
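The 3-D pointing-error measure can be illustrated with a small sketch (our construction, with made-up coordinates): the error for each trial is the Euclidean distance between the intended target and the point actually touched:

```python
import numpy as np

def mean_pointing_error(targets, touches):
    """Mean 3-D Euclidean distance between intended targets and the
    points actually touched, one row per pointing trial."""
    diff = np.asarray(targets, float) - np.asarray(touches, float)
    return np.linalg.norm(diff, axis=1).mean()

# Hypothetical (x, y, z) coordinates in cm; not the study's data.
targets = [(0.0, 0.0, 30.0), (10.0, 5.0, 30.0), (-8.0, 2.0, 30.0)]
touches = [(1.2, -0.5, 30.4), (11.1, 5.8, 29.5), (-6.9, 2.6, 30.2)]
print(f"mean pointing error: {mean_pointing_error(targets, touches):.2f} cm")
```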
Journal of Intelligent Transportation Systems | 2005
Haitham Al-Deek; Ayman Mohamed; Linda C. Malone
This article presents a discrete-event stochastic microscopic simulation model specifically developed to evaluate the operational performance of toll plazas. The model has been calibrated, validated, and applied to toll plazas equipped with Electronic Toll Collection (ETC) in Orlando, Florida. Traffic behavior is represented using a set of mathematical and logic algorithms that control the conflicts among vehicles within the toll plaza area. Modified versions of Car-Following and Lane-Changing Algorithms and a new Toll-Lane Selection Algorithm are integrated into this new model to simulate traffic operation at toll plazas. The model output includes Measures of Effectiveness that can be used to evaluate the performance of existing and future individual toll lanes and the entire toll plaza system. Real-life data collected at the busiest toll plaza in the Orlando-Orange County Expressway Authority (OOCEA) system was compared with the model output to validate the developed model. Statistical tests indicated that there is no significant difference at the 95% confidence level between Measures of Effectiveness obtained from the model and those collected in the real world.
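As a toy analogue of the model described (not the authors' simulator), the sketch below runs an event-driven single toll lane with Poisson arrivals and payment-type-dependent service times, reporting mean wait as a simple measure of effectiveness. All rates and service times are illustrative assumptions:

```python
import random

def simulate_toll_lane(n_vehicles=1000, arrival_rate=0.15, etc_share=0.6,
                       seed=1):
    """Event-driven single toll lane: Poisson arrivals, FIFO service,
    service time depending on payment type (ETC vs. manual).
    All parameter values are illustrative assumptions."""
    random.seed(seed)
    t = 0.0          # current arrival time (s)
    free_at = 0.0    # time the booth next becomes free
    waits = []
    for _ in range(n_vehicles):
        t += random.expovariate(arrival_rate)            # next arrival
        service = 2.0 if random.random() < etc_share else 8.0
        start = max(t, free_at)                          # FIFO queue
        waits.append(start - t)
        free_at = start + service
    return sum(waits) / len(waits)

print(f"mean wait per vehicle: {simulate_toll_lane():.1f} s")
```

A full plaza model would add multiple lanes, the lane-selection logic, and car-following behavior on the approach; this sketch only shows the event-driven core.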
winter simulation conference | 2008
David J. Kaup; Tom Clarke; Rex Oleson; Linda C. Malone; Florian Jentsch
Very few crowds consist of individuals who are exactly the same. Defining variables, such as age, and how they affect an individual's movement could increase realism in simulations of crowd movement. In this paper, we present and discuss how age variations of individuals can be included in crowd simulations. Starting with the Helbing, Molnar, Farkas, and Vicsek model (HMFV), we modeled age differences by modifying the strength of the existing social forces. We created simulation scenarios with the varied strengths and used multiple approaches for validation, including experts' subjective validation and experimental validation via comparison of model predictions with observed crowd movements. The results indicated that individual characteristics such as age can be modeled by social forces. Future extensions of our work would be to include individuals, small subgroups, and/or large groups of people to model multicultural crowd behavior.
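A minimal sketch of the HMFV-style force computation the authors modify (our simplification, not their code): each pedestrian feels a driving force toward a goal plus exponential repulsion from neighbors, and the repulsion amplitude is the natural place to insert an age-dependent scaling:

```python
import numpy as np

def social_force(pos, vel, goal, others, desired_speed, strength,
                 tau=0.5, fall_off=0.3):
    """One Helbing-style force evaluation for a single pedestrian:
    a driving term toward the goal plus exponential repulsion from
    each neighbor. `strength` (the repulsion amplitude) is where an
    age-dependent scaling could be inserted."""
    direction = (goal - pos) / np.linalg.norm(goal - pos)
    drive = (desired_speed * direction - vel) / tau
    repulsion = np.zeros(2)
    for other in others:
        d = pos - other
        dist = np.linalg.norm(d)
        repulsion += strength * np.exp(-dist / fall_off) * d / dist
    return drive + repulsion

# Hypothetical scaling: an older pedestrian might get a lower desired
# speed and a larger `strength` (stronger personal space).
pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
goal, others = np.array([10.0, 0.0]), [np.array([1.0, 0.5])]
print(social_force(pos, vel, goal, others, desired_speed=1.0, strength=2.0))
```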
The American Statistician | 1984
Charles J. Monlezun; David C. Blouin; Linda C. Malone
Abstract The terms “split plot” and “repeated measures” are used by many authors throughout the statistical literature. However, the former is not used in a consistent manner, since many authors use this term in reference to inherently different experimental situations, each requiring a different analysis. Conversely, quite often inherently different experiments share a common analysis, as is the case with some types of split-plot and repeated measures experiments. Four distinct analyses are employed by various authors when describing split-plot and repeated measures experiments in the classic literature. Four linear models, accounting for the four analyses, are stated in this article. Detailed examples are given that illustrate the relationships between experimental setup, model specification, and subsequent analysis.
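For concreteness, one of the model forms in question can be written as the classic split-plot mixed model below; the notation is ours, not the article's:

```latex
% Sketch (our notation, not the article's) of one classic form:
% whole plots receive levels of factor A, subplots levels of factor B.
\[
  y_{ijk} = \mu + \alpha_i + d_{ij} + \beta_k + (\alpha\beta)_{ik} + e_{ijk},
  \qquad
  d_{ij} \sim N(0,\sigma_d^2), \quad e_{ijk} \sim N(0,\sigma_e^2),
\]
% where alpha_i is the whole-plot treatment effect, d_ij the whole-plot
% error, beta_k the subplot treatment effect, and e_ijk the subplot error.
```

The two error terms are what distinguish this analysis from an ordinary two-way model: whole-plot comparisons are tested against d, subplot comparisons against e.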
winter simulation conference | 2004
Christopher Michael Hill; Linda C. Malone
Using simulated data to develop and study diagnostic tools for data analysis is very beneficial. The user can gain insight about what happens when assumptions are violated, since the true model is known. However, care must be taken to ensure that the simulated data are a reasonable representation of what one would usually expect in the real world. This paper discusses the construction of simulated data sets and provides specific examples using linear and logistic regression analysis. It also addresses the execution of simulation-based data studies following data construction.
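A minimal sketch of the kind of construction discussed (our example, with assumed coefficients): generate data from a known linear model and a known logistic model, then check that the fitted coefficients recover the truth:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)

# Linear data from a known true model: y = 2 + 3x + noise.
y = 2 + 3 * x + rng.normal(scale=1.5, size=n)
print(sm.OLS(y, sm.add_constant(x)).fit().params)      # near [2, 3]

# Logistic data: P(z = 1) follows a known logit, 0.5 + 1.2x.
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
z = rng.binomial(1, p)
print(sm.Logit(z, sm.add_constant(x)).fit(disp=False).params)  # near [0.5, 1.2]
```

Because the true model is known, one can then deliberately violate an assumption (e.g., add heteroscedastic noise) and observe how the diagnostics respond.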