Brian A. Weiss
National Institute of Standards and Technology
Publications
Featured research published by Brian A. Weiss.
Intelligent Robots and Systems | 2003
Adam Jacoff; Elena R. Messina; Brian A. Weiss; Satoshi Tadokoro; Yuki Nakagawa
In this paper, we discuss the development and proliferation of robot test arenas that provide tangible, realistic, and challenging environments for mobile robot researchers interested in urban search and rescue applications and other unstructured environments. These arenas allow direct comparison of robotic approaches and objective performance evaluation, and can ultimately provide a proving ground for fieldable robotic systems such as those used at the World Trade Center collapse. International robot competitions using these arenas require robots to negotiate complex and collapsed structures, find simulated victims, and generate human-readable maps of the environment. A performance metric is presented which quantifies several pertinent robot capabilities and produces an overall score used to evaluate and compare robotic implementations. Future directions for the arenas and the competitions are also discussed.
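As a rough illustration of the kind of composite metric described above, an overall score can be computed as a weighted combination of per-capability scores. The capability names and weights below are hypothetical, not those defined in the paper:

```python
# Hypothetical sketch of an arena scoring metric: per-capability scores
# (each 0.0-1.0) are combined into one weighted overall score so that
# robotic implementations can be compared directly.

def overall_score(capabilities: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted combination of per-capability scores, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(weights[c] * capabilities.get(c, 0.0) for c in weights) / total_weight

# Illustrative run: capability names and weights are invented for this sketch.
run = {"victims_found": 0.8, "map_quality": 0.6, "area_covered": 0.7}
weights = {"victims_found": 0.5, "map_quality": 0.3, "area_covered": 0.2}
print(f"Overall score: {overall_score(run, weights):.2f}")  # 0.72
```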
Journal of Intelligent Manufacturing | 2016
Gregory W. Vogl; Brian A. Weiss; Moneer M. Helu
Prognostics and health management (PHM) technologies reduce time and costs for maintenance of products or processes through efficient and cost-effective diagnostic and prognostic activities. PHM systems use real-time and historical state information of subsystems and components to provide actionable information, enabling intelligent decision-making for improved performance, safety, reliability, and maintainability. However, PHM is still an emerging field, and much of the published work has been either too exploratory or too limited in scope. Future smart manufacturing systems will require PHM capabilities that overcome current challenges, while meeting future needs based on best practices, for implementation of diagnostics and prognostics. This paper reviews the challenges, needs, methods, and best practices for PHM within manufacturing systems. This includes PHM system development across numerous areas, highlighted by diagnostics, prognostics, dependability analysis, data management, and business considerations. Based on current capabilities, PHM systems are shown to benefit from open-system architectures, cost-benefit analyses, method verification and validation, and standards.
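To make the prognostics idea concrete, here is a minimal sketch, assuming a linear degradation model and an arbitrary failure threshold (both assumptions of this sketch; production PHM systems use far richer models): fit a trend to a health indicator and extrapolate to estimate remaining useful life (RUL).

```python
# Minimal prognostics sketch: least-squares fit of a degrading health
# indicator over time, extrapolated forward to a failure threshold to
# estimate remaining useful life. Illustrative only.
import numpy as np

def estimate_rul(times: np.ndarray, health: np.ndarray, threshold: float) -> float:
    """Fit health vs. time linearly and extrapolate to the failure threshold."""
    slope, intercept = np.polyfit(times, health, deg=1)
    if slope >= 0:  # no degradation trend observed; no finite RUL estimate
        return float("inf")
    t_fail = (threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

t = np.array([0.0, 10.0, 20.0, 30.0])      # operating hours (hypothetical)
h = np.array([1.00, 0.95, 0.91, 0.86])     # degrading health indicator
print(f"Estimated RUL: {estimate_rul(t, h, threshold=0.5):.1f} hours")
```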
Performance Metrics for Intelligent Systems | 2009
Craig I. Schlenoff; Gregory A. Sanders; Brian A. Weiss; Frederick M. Proctor; Michelle Potts Steves; Ann M. Virts
The Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program is a Defense Advanced Research Projects Agency (DARPA) advanced technology research and development program. The goal of the TRANSTAC program is to demonstrate capabilities to rapidly develop and field free-form, two-way translation systems that enable speakers of different languages to communicate with one another in real-world tactical situations without an interpreter. The National Institute of Standards and Technology (NIST), with support from MITRE and Appen Pty Ltd., has been funded to serve as the Independent Evaluation Team (IET) for the TRANSTAC program. The IET is responsible for analyzing the performance of the TRANSTAC systems by designing and executing multiple TRANSTAC evaluations and analyzing their results. To accomplish this, NIST has applied the SCORE (System, Component, and Operationally Relevant Evaluations) framework. SCORE is a unified set of criteria and software tools for defining a performance evaluation approach for complex intelligent systems. It provides a comprehensive evaluation blueprint that assesses the technical performance of a system and its components by isolating variables, as well as capturing the end-user utility of the system in realistic use-case environments. This document describes the TRANSTAC program and explains how the SCORE framework was applied to assess the technical and utility performance of the TRANSTAC systems.
Performance Metrics for Intelligent Systems | 2008
Brian A. Weiss; Craig I. Schlenoff
NIST has developed the System, Component, and Operationally-Relevant Evaluations (SCORE) framework as a formal guide for designing evaluations of emerging technologies. SCORE captures both technical performance and end-user utility assessments of systems and their components within controlled and realistic environments. Its purpose is to present an extensive (but not necessarily exhaustive) picture of how a system would behave in a realistic operating environment. The framework has been applied to numerous evaluation efforts over the past three years, producing valuable quantitative and qualitative metrics. This paper presents the building blocks of the SCORE methodology, including the system goals and design criteria that drive the evaluation design process. An evolution of the SCORE framework in capturing utility assessments at the capability level of a system is also presented. Examples are shown of SCORE's successful application to the evaluation of soldier-worn sensor systems and two-way, free-form spoken language translation technologies.
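A minimal sketch of how SCORE-style results might be organized, assuming one simple record per evaluation; the field names and values are illustrative, not NIST's actual data model. The key point is that technical performance and end-user utility are captured side by side, across levels and environments:

```python
# Sketch: each evaluation record carries its level (component/system/
# operational), its environment, and both technical and utility metrics,
# so the combined set gives the "extensive picture" the framework targets.
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    level: str                      # "component", "system", or "operational"
    environment: str                # "controlled" or "realistic"
    technical_metrics: dict = field(default_factory=dict)
    utility_metrics: dict = field(default_factory=dict)

evals = [
    Evaluation("component", "controlled",
               technical_metrics={"word_error_rate": 0.12}),
    Evaluation("system", "realistic",
               utility_metrics={"task_completion": 0.85, "user_rating": 4.1}),
]
for e in evals:
    print(e.level, e.environment, e.technical_metrics or e.utility_metrics)
```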
Journal of Field Robotics | 2007
Craig I. Schlenoff; Michelle Potts Steves; Brian A. Weiss; Michael O. Shneier; Ann M. Virts
Soldiers are often asked to perform missions that last many hours and are extremely stressful. After a mission is complete, the soldiers are typically asked to provide a report describing the most important things that happened during the mission. Due to the various stresses associated with military missions, there are undoubtedly many instances in which important information is missed or not reported and, therefore, not available for use when planning future missions. The ASSIST (Advanced Soldier Sensor Information System and Sensors Technology) program is addressing this challenge by instrumenting soldiers with sensors that they can wear directly on their uniforms. During the mission, the sensors continuously record what is going on around the soldier. With this information, soldiers are able to give more accurate reports without relying solely on their memory. In order for systems like this (often termed autonomous or intelligent systems) to be successful, they must be comprehensively and quantitatively evaluated to ensure that they will function appropriately and as expected in a wartime environment. The primary contribution of this paper is to introduce and define a framework and approach to performance evaluation called SCORE (System, Component, and Operationally Relevant Evaluation) and describe the results of applying it to evaluate the ASSIST technology. As the name implies, SCORE is built around the premise that, in order to get a true picture of how a system performs in the field, it must be evaluated at the component level, the system level, and in operationally relevant environments. The SCORE framework provides proven techniques to aid in the performance evaluation of many types of intelligent systems. To date, SCORE has only been applied to technologies under development (formative evaluation), but the authors believe that this approach would lend itself equally well to the evaluation of technologies ready to be fielded (summative evaluation).
Volume 2: Materials; Biomanufacturing; Properties, Applications and Systems; Sustainable Manufacturing | 2016
Moneer M. Helu; Brian A. Weiss
The development of digital technologies for manufacturing has been challenged by the difficulty of navigating the breadth of new technologies available to industry. This difficulty is compounded by technologies developed without a good understanding of the capabilities and limitations of the manufacturing environment, especially within small-to-medium enterprises (SMEs). This paper describes industrial case studies conducted to identify the needs, priorities, and constraints of manufacturing SMEs in the areas of performance measurement, condition monitoring, diagnosis, and prognosis. These case studies focused on contract and original equipment manufacturers with fewer than 500 employees from several industrial sectors. Solution and equipment providers and National Institute of Standards and Technology (NIST) Hollings Manufacturing Extension Partnership (MEP) centers were also included. Each case study involved discussions with key shop-floor personnel as well as site visits with some participants. The case studies highlight SMEs' strong need for access to appropriate data to better understand and plan manufacturing operations. They also help define industrially relevant use cases in several areas of manufacturing operations, including scheduling support, maintenance planning, resource budgeting, and workforce augmentation.
Manufacturing Review | 2016
Xiaoning Jin; David Siegel; Brian A. Weiss; Ellen Gamel; Wei Wang; Jay Lee; Jun Ni
A research study was conducted (1) to examine the practices employed by US manufacturers to achieve productivity goals and (2) to understand what level of intelligent maintenance technologies and strategies are being incorporated into these practices. This study found that the effectiveness and choice of maintenance strategy were strongly correlated with the size of the manufacturing enterprise; there were large differences in the adoption of advanced maintenance practices and diagnostics and prognostics technologies between small and medium-sized enterprises (SMEs) and large enterprises. Despite their greater adoption of maintenance practices and technologies, large manufacturing organizations have had only modest success with respect to diagnostics and prognostics and preventive maintenance projects. The varying degrees of success with respect to preventive maintenance programs highlight the opportunity for larger manufacturers to improve their maintenance practices and use of advanced prognostics and health management (PHM) technology. The future outlook for manufacturing PHM technology among the manufacturing organizations considered in this study was overwhelmingly positive; many manufacturing organizations have current and planned projects in this area. Given the current modest state of implementation and the positive outlook for this technology, gaps, future trends, and roadmaps for manufacturing PHM and maintenance strategy are presented.
Performance Metrics for Intelligent Systems | 2010
Brian A. Weiss; Linda C. Schmidt
Advanced and intelligent systems are constantly evolving across a range of fields, including the military, law enforcement, automotive, and manufacturing industries. Testing the performance of these technologies is critical to (1) update the system designers on areas for improvement, (2) solicit end-user feedback during formative tests so that modifications can be made in future revisions, and (3) validate the extent of a technology's capabilities so that sponsors, purchasers, and end-users know exactly what they are receiving. Evaluation events can be minimally designed to include a few basic tests of key technology capabilities, or they can evolve into extensive test events that emphasize multiple components and capabilities along with the complete system itself. Tests of advanced and intelligent systems typically take the latter form and can occur frequently, depending on system complexity. Numerous evaluation design frameworks have been produced to create test designs that appropriately assess the performance of intelligent systems. While most of these frameworks allow broad evaluation plans to be created, each framework has been focused to address specific project and/or technological needs and therefore has bounded applicability. This paper presents and expands upon the current development of the Multi-Relationship Evaluation Design (MRED) framework. Development of MRED is motivated by the desire to create an evaluation framework capable of automatically producing detailed evaluation blueprints while receiving uncertain input information. The authors build upon their previous work in developing MRED through an initial discussion of key evaluation design elements. Additionally, the authors elaborate upon their previously defined relationships among evaluation personnel to define evaluation structural components pertaining to the evaluation scenarios, test environment, and data collection methods. These terms and their relationships are demonstrated in an example evaluation design for an emerging technology.
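As an illustrative sketch of the kinds of relationships MRED reasons over, personnel roles can be mapped to the structural components they influence. The roles and links below are hypothetical examples, not MRED's actual model:

```python
# Sketch: a simple adjacency mapping from evaluation personnel roles to
# the structural components of an evaluation (scenario, environment,
# data collection). Role names and links are invented for illustration.
relationships = {
    "evaluation_designer": ["scenario", "data_collection"],
    "test_conductor":      ["environment", "data_collection"],
    "end_user":            ["scenario"],
}

def components_for(role: str) -> list[str]:
    """Return the structural components a given role influences."""
    return relationships.get(role, [])

print(components_for("test_conductor"))  # ['environment', 'data_collection']
```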
Volume 5: 22nd International Conference on Design Theory and Methodology; Special Conference on Mechanical Vibration and Noise | 2010
Brian A. Weiss; Linda C. Schmidt; Harry A. Scott; Craig I. Schlenoff
As new technologies develop and mature, it becomes critical to provide both formative and summative assessments of their performance. Performance assessment events range in form from a few simple tests of key elements of the technology to highly complex and extensive evaluation exercises targeting specific levels and capabilities of the system under scrutiny. Typically, the more advanced the system, the more often performance evaluations are warranted, and the more complex the evaluation planning becomes. Numerous evaluation frameworks have been developed to generate evaluation designs intended to characterize the performance of intelligent systems. Many of these frameworks enable the design of extensive evaluations, but each has its own focused objectives within an inherent set of known boundaries. This paper introduces the Multi-Relationship Evaluation Design (MRED) framework, whose ultimate goal is to automatically generate an evaluation design based upon multiple inputs. The MRED framework takes input goal data and outputs an evaluation blueprint complete with specific evaluation elements, including the level of technology to be tested, metric type, user type, and evaluation environment. Among MRED's unique features is that it characterizes these relationships and manages their uncertainties, along with those associated with the evaluation inputs. The authors introduce MRED by first presenting relationships between four main evaluation design elements. These evaluation elements are defined and the relationships between them are established, including the connections between evaluation personnel (not just the users), their level of knowledge, and their decision-making authority. This is further supported through the definition of key terms. An example is presented in which these terms and relationships are applied to the evaluation design of an automobile technology. An initial validation step follows in which MRED is applied to a speech translation technology whose evaluation design was informed by the successful use of a pre-existing evaluation framework. MRED is still in its early stages of development; this paper presents numerous MRED outputs, and future publications will present the remaining outputs, the uncertain inputs, and MRED's implementation steps that produce the detailed evaluation blueprints.
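A minimal sketch of the input-to-blueprint idea, under the assumption that uncertain inputs are represented as candidate values with confidence weights and the blueprint takes the most likely candidate per element. The four element names come from the paper; the weights and the selection rule are placeholders, not MRED's actual mechanism:

```python
# Sketch: each evaluation design element has uncertain candidate values;
# a blueprint is produced by selecting the highest-confidence candidate
# for each element. Selection logic is illustrative only.
def select(candidates: dict[str, float]) -> str:
    """Pick the candidate value with the highest confidence weight."""
    return max(candidates, key=candidates.get)

inputs = {
    "technology_level": {"component": 0.7, "system": 0.3},
    "metric_type":      {"quantitative": 0.6, "qualitative": 0.4},
    "user_type":        {"trained_operator": 0.9, "novice": 0.1},
    "environment":      {"laboratory": 0.5, "field": 0.5},
}
blueprint = {element: select(cands) for element, cands in inputs.items()}
print(blueprint)
```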
Volume 9: 23rd International Conference on Design Theory and Methodology; 16th Design for Manufacturing and the Life Cycle Conference | 2011
Brian A. Weiss; Linda C. Schmidt
Advanced and intelligent systems within the manufacturing, military, homeland security, and automotive fields are constantly under development or improvement. Testing the performance of these technologies is critical to (1) notify the system designers of specific areas for improvement, (2) solicit end-user feedback, and (3) validate the extent of a technology's capabilities. Evaluation designers have expended considerable effort in devising methodologies to streamline the development of test plans in support of performance evaluation. The Multi-Relationship Evaluation Design (MRED) methodology is being developed to take multiple inputs from numerous input source categories and automatically output evaluation blueprints that specify the test characteristics. The MRED methodology is being created to have numerous advantages over current test design methods, including (1) creating test plans that appraise both quantitative and qualitative performance of technologies incorporating both human-controlled and autonomous capabilities, (2) speeding the test planning and implementation cycle to improve the effectiveness of a technology's development cycle, and (3) factoring in unknown and uncertain test plan input data. This paper presents the following: the MRED model is discussed; detailed definitions and relevant relationships of the stakeholder input category are presented; the output test plan element of evaluation personnel is defined and its constraints discussed; the stakeholders' influence on the selection of evaluation personnel is presented, including its initial formulation; and several examples of this cause (stakeholder preferences) and effect (evaluation personnel selection) relationship are highlighted in two unique technology test plans.
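A hedged sketch of the cause (stakeholder preferences) and effect (evaluation personnel selection) relationship described above; the preference constraints and personnel pool are invented for illustration and are not drawn from the paper:

```python
# Sketch: stakeholder preferences act as constraints that filter a pool
# of candidate evaluation personnel. Names, roles, and the clearance
# constraint are hypothetical examples.
personnel_pool = [
    {"name": "analyst_a",  "role": "evaluator", "cleared": True},
    {"name": "operator_b", "role": "end_user",  "cleared": False},
    {"name": "engineer_c", "role": "developer", "cleared": True},
]

def select_personnel(pool, required_roles, require_clearance=False):
    """Filter the pool by stakeholder-imposed role and clearance constraints."""
    return [p for p in pool
            if p["role"] in required_roles
            and (p["cleared"] or not require_clearance)]

# Stakeholders want evaluator and end-user roles, but the venue requires
# clearance, so only analyst_a qualifies:
print(select_personnel(personnel_pool, {"evaluator", "end_user"},
                       require_clearance=True))
```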