Publication


Featured research published by David A. Koonce.


Computers & Industrial Engineering | 2000

Using data mining to find patterns in genetic algorithm solutions to a job shop schedule

David A. Koonce; S.-C. Tsai

This paper presents a novel use of data mining algorithms for the extraction of knowledge from a large set of job shop schedules. The purpose of this work is to apply data mining methodologies to explore the patterns in data generated by a genetic algorithm performing a scheduling operation, and to develop a rule-set scheduler that approximates the genetic algorithm scheduler. Genetic algorithms are stochastic search algorithms based on the mechanics of genetics and natural selection. Because of genetic inheritance, the characteristics of the survivors after several generations should be similar. When a genetic algorithm is used for job shop scheduling, the solution is an operation sequence for resource allocation. Among these optimal or near-optimal solutions, similar relationships may exist between the characteristics of operations and their sequential order. An attribute-oriented induction methodology was used to explore the relationship between an operation's sequence position and its attributes, and a set of rules was developed. These rules can duplicate the genetic algorithm's performance on an identical problem and provide solutions that are generally superior to a simple dispatching rule on similar problems.
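
A minimal sketch of the rule-induction idea described above, assuming a toy representation in which each GA-scheduled operation carries a processing time and a sequence position. The attribute classes, cutoffs, and sample schedules are illustrative assumptions, not the paper's actual data or rules.

```python
# Hypothetical attribute-oriented-induction-style rule extraction from GA schedules.
from collections import Counter, defaultdict

def processing_time_class(t):
    """Generalize a raw processing time into a coarse class (assumed cutoffs)."""
    if t <= 10:
        return "short"
    if t <= 30:
        return "medium"
    return "long"

def position_class(pos, n_ops):
    """Generalize an operation's position in the GA sequence into thirds."""
    frac = pos / n_ops
    return "early" if frac < 1/3 else "middle" if frac < 2/3 else "late"

def induce_rules(schedules):
    """schedules: GA solutions, each a list of (processing_time, position, n_ops) tuples."""
    votes = defaultdict(Counter)
    for schedule in schedules:
        for proc_time, pos, n_ops in schedule:
            votes[processing_time_class(proc_time)][position_class(pos, n_ops)] += 1
    # Keep the majority sequence-position class for each attribute class as a rule.
    return {attr: counts.most_common(1)[0][0] for attr, counts in votes.items()}

# Toy example: two GA schedules for a 6-operation problem.
ga_schedules = [
    [(5, 0, 6), (8, 1, 6), (20, 2, 6), (25, 3, 6), (40, 4, 6), (55, 5, 6)],
    [(7, 0, 6), (22, 1, 6), (9, 2, 6), (28, 3, 6), (60, 4, 6), (45, 5, 6)],
]
print(induce_rules(ga_schedules))  # e.g. {'short': 'early', 'medium': 'middle', 'long': 'late'}
```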


Computers in Industry | 2003

A hierarchical cost estimation tool

David A. Koonce; Robert P. Judd; Dusan N. Sormaz; Dale T. Masel

The estimation of the manufacturing cost of a part in all phases of the design stage is crucial to concurrent engineering. To better estimate the cost of a product, data must be available from both engineering systems and business systems. This paper presents a cost estimation system being developed to support design-time cost estimation using the Federated Intelligent Product EnviRonment (FIPER), which is being developed as part of the National Institute of Standards and Technology (NIST) Advanced Technology Program (ATP). The FIPER research team is developing an architecture that interconnects design and analysis software tools in a peer-level architecture to support multidisciplinary design optimization (MDO), design for six sigma (DFSS) and robust design.


Journal of Intelligent Manufacturing | 2008

A neural network job-shop scheduler

Gary R. Weckman; Chandrasekhar Ganduri; David A. Koonce

This paper focuses on the development of a neural network (NN) scheduler for scheduling job shops. In this hybrid intelligent system, genetic algorithms (GA) are used to generate optimal schedules for a known benchmark problem. In each optimal solution, every individually scheduled operation of a job is treated as a decision which contains knowledge. Each decision is modeled as a function of a set of job characteristics (e.g., processing time), which are divided into classes using domain knowledge from common dispatching rules (e.g., shortest processing time). A NN is used to capture the predictive knowledge regarding the assignment of an operation's position in a sequence. The trained NN could successfully replicate the performance of the GA on the benchmark problem. The developed NN scheduler was then tested against the GA, the Attribute-Oriented Induction data mining methodology and common dispatching rules on a test set of randomly generated problems. The better performance of the NN scheduler on the test problem set compared to the other methods demonstrates the feasibility of NN-based scheduling. The scalability of the NN scheduler to larger problem sizes was also found to be satisfactory in replicating the performance of the GA.
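
A minimal sketch of the hybrid GA-to-NN idea described above, assuming scikit-learn is available. The feature set, class labels, and tiny hand-made training set are illustrative assumptions; the paper's actual network, job characteristics, and benchmark problem differ.

```python
# Hypothetical NN scheduler: learn "job characteristics -> sequence-position class"
# from examples labeled by GA solutions, then predict classes for unseen operations.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: [processing_time, remaining_work, operation_index]; labels are
# sequence-position classes (0 = early, 1 = middle, 2 = late) taken from GA schedules.
X = np.array([
    [5, 80, 0], [8, 72, 1], [20, 52, 2],
    [25, 27, 3], [40, 40, 1], [55, 55, 0],
    [7, 60, 2], [30, 30, 4], [60, 60, 5],
], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2])

# Small feed-forward network trained to mimic the GA's sequencing decisions.
scheduler = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
scheduler.fit(X, y)

# Predict a position class for an unseen operation; a full scheduler would then
# order operations by predicted class before dispatching.
print(scheduler.predict([[10.0, 70.0, 1.0]]))
```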


Annual Conference on Computers | 1997

A data mining tool for learning from manufacturing systems

David A. Koonce; Cheng-Hung Fang; Shi-Chi Tsai

This paper describes a software tool, DBMine, developed to assist industrial engineers in data mining. The tool implements three common data mining methodologies: Bacon's algorithm, decision trees and DB-Learn. Implemented in Microsoft Visual Basic 3.0, DBMine can utilize data in Microsoft Access 2.0 and in Watcom SQL databases. The paper also presents an example session in which job shop sequences produced by a genetic algorithm are explored for regularity.
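
A minimal sketch of the BACON-style law discovery that one of the listed methodologies performs, assuming a toy tolerance test for a constant ratio or product between two measured columns. The sample data and tolerance are illustrative assumptions; DBMine's actual implementation is not reproduced here.

```python
# Hypothetical BACON-style invariant search over two numeric columns.
def bacon_invariant(xs, ys, tol=1e-6):
    """Return a discovered law ('ratio' or 'product', constant), or None."""
    ratios = [x / y for x, y in zip(xs, ys)]
    products = [x * y for x, y in zip(xs, ys)]
    for name, values in (("ratio", ratios), ("product", products)):
        if max(values) - min(values) <= tol * max(abs(v) for v in values):
            return name, sum(values) / len(values)
    return None

# Toy example: cycle time grows linearly with lot size, so their ratio is constant.
lot_size = [10.0, 20.0, 40.0, 80.0]
cycle_time = [5.0, 10.0, 20.0, 40.0]
print(bacon_invariant(lot_size, cycle_time))  # ('ratio', 2.0)
```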


International Journal of Information Management | 2012

Initiation, Experimentation, Implementation of Innovations: The Case for Radio Frequency Identification Systems

Vic Matta; David A. Koonce; Anand Jeyaraj

This research primarily examines the stages hypothesis of technology adoption, covering the Initiation, Experimentation, and Implementation stages, in supply chain sector organizations as reported by their management personnel. Further, it examines key antecedents that may influence the various stages, including top management support, external pressure, and organization size. Using responses provided by top management representatives of 210 supply chain organizations on their organizations' engagement with Radio Frequency Identification (RFID) technologies, this research finds that the stages hypothesis holds for RFID technologies. Specifically, organizations were seen to progress sequentially through the Initiation, Experimentation, and Implementation stages: over 80% of organizations that had reached the Implementation stage had gone through the Initiation and Experimentation stages as well. Additionally, the data showed that the antecedents exerted varying levels of influence on the three stages. Top management support strongly influenced all three stages; external pressure influenced the Initiation and Implementation stages; and organizational size influenced the Experimentation and Implementation stages. The paper discusses several implications for research and practice.


Annual Conference on Computers | 1998

EQL: an EXPRESS query language

David A. Koonce; Lizhong Huang; Robert P. Judd

EQL, an acronym for EXPRESS Query Language, is an SQL-like query language used to perform ad hoc queries on data in PART 21 files. PART 21 is the clear-text encoding of data modeled in the object-oriented EXPRESS format and is the exchange format for STEP standards such as AP203. Traditional uses of STEP files have been transferring data between similar tools and populating a data model in one tool with the data from another, for example moving a part design from one CAD system to another. If, however, a software system has a different view of the information, a STEP file from one system contains significant amounts of data not applicable to the other system. The receiving system needs the ability to query the STEP file for the objects important to its processing. Additionally, to integrate software systems using EXPRESS and PART 21 as a data transfer mechanism, an ad hoc query language is needed to account for the data in the multiple schemas a tool might expect to encounter. EQL is designed to accept data files in schemas that are not predefined to the tool and can perform all traditional data manipulation (DML) operations: select, update, insert and delete. EQL does not support data definition (DDL) operations such as creating new object classes.
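
A minimal illustration of the kind of ad hoc selection EQL performs on a PART 21 clear-text file, sketched here as a naive Python scan rather than EQL's actual parser or syntax; the PART 21 fragment and entity instances are invented for the example.

```python
# Hypothetical "select all instances of one entity type" over a PART 21 fragment.
import re

PART21_DATA = """
#10=PRODUCT('BRACKET','mounting bracket','',(#20));
#20=PRODUCT_CONTEXT('',#30,'mechanical');
#40=PRODUCT('SHAFT','drive shaft','',(#20));
"""

def select_entities(text, entity_type):
    """Return (instance id, raw argument string) pairs for every instance of entity_type."""
    pattern = re.compile(r"#(\d+)\s*=\s*" + re.escape(entity_type) + r"\((.*)\);")
    return [(int(m.group(1)), m.group(2)) for m in pattern.finditer(text)]

# Roughly analogous to an SQL-style "SELECT * FROM PRODUCT" issued against the file.
for instance_id, args in select_entities(PART21_DATA, "PRODUCT"):
    print(instance_id, args)
```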


Annual Conference on Computers | 1997

Development of a unified data meta-model for CAD-CAPP-MRP-NC verification integration

Dinesh Dhamija; David A. Koonce; Robert P. Judd

This paper presents an architecture and a schema for the integration of heterogeneous manufacturing databases. Termed the Unified Data Meta Model (UDMM), this structure is based on common relational data modeling methods and was developed to support the integration of CAD, CAPP, NC tool path verification and MRP systems. The model was developed from an analysis of the existing manufacturing operations of three major corporations and the data structures of popular CAD, CAPP, MRP and NC verification software tools. Key features of the UDMM are that data instantiated for communication is not persistent and that the model represents the union of the intersection of the information represented by the local software tools. The specification of the UDMM and the nature of the integration architecture allow both new attributes and new entities to be added to the model as new manufacturing functional domains are introduced.


Annual Conference on Computers | 1994

Model-based manufacturing integration: a paradigm for virtual manufacturing systems engineering

Charles M. Parks; David A. Koonce; Luis Rabelo; Robert P. Judd; John A. Sauter

Current manufacturing system design methodologies produce multiple models of the eventual manufacturing system. These models reflect either the designer's view of some subsystem, such as materials handling, some level of abstraction, or some developmental stage in the design of the system. These models serve to break the complex system design into smaller, more manageably sized problems. This paper makes a case for the need to integrate these models before the physical system is constructed.


Annual Conference on Computers | 1997

An integrated manufacturing systems design environment

Charles M. Parks; David A. Koonce; Robert P. Judd; Michael Johnson

Manufacturing systems design includes the determination of the manufacturing operations, operation sequence, spatial layout, and part routings required to make a part. Currently, manufacturing systems design is done by a team of experts, an approach that is prone to errors. Unfortunately, during this process there is usually no documentation of the decision-making criteria, which makes successful designs hard to reproduce, especially by different experts. The work reported in this paper shows a technical approach to an Integrated Manufacturing Systems Design Workstation which uses software techniques to collect and organize the data, develop the manufacturing system design and validate the design using discrete-event simulation. The workstation is composed of commercially available tools integrated into a single seamless package.


Annual Conference on Computers | 1995

Development of an integrated information model for computer integrated manufacturing

Pascal Dreer; David A. Koonce

The development of a federated database system (FDBS) integrates existing CIM components using a bottom-up development process. The components used in this paper do not support any kind of database management. The integration of those components into a federation may be done using two general approaches [3]: migration of the files to a DBMS, or extension of the file system to support DBMS-like features. Both migration and extension of the file system are costly solutions and depend on the existing capabilities of the components. Problems may occur when the federated schema becomes too large; the schema might then be split into smaller federated schemas (a loosely coupled FDBS).

Collaboration


Dive into David A. Koonce's collaboration.

Top Co-Authors

Arun N. Nambiar

California State University
