Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ronald N. Kostoff is active.

Publication


Featured research published by Ronald N. Kostoff.


IEEE Transactions on Engineering Management | 2001

Science and technology roadmaps

Ronald N. Kostoff; Robert R. Schaller

Science and technology (S&T) roadmaps serve many purposes, including: enhancing communications among researchers, technologists, product managers, suppliers, users, and other stakeholders; identifying gaps and opportunities in S&T programs; and identifying obstacles to rapid and low-cost product development. S&T managers also use roadmaps to help identify those S&T areas that have high promise, and to accelerate the transfer of the S&T to eventual products. However, little attention has been paid to the practice of roadmapping in the published literature. This paper is a first attempt to bring some common definition to roadmapping practices and to display the underlying unity of seemingly fragmented roadmap approaches. The paper begins with generic roadmap definitions, including a taxonomy of roadmaps that attempts to better classify and unify the broad spectrum of roadmap objectives and uses. Characteristics of retrospective and prospective roadmaps are then identified and analyzed, as are summary characteristics of bibliometric-based S&T mapping techniques. The roadmap construction process, including fundamental principles for constructing high-quality roadmaps, is presented in detail.


Technological Forecasting and Social Change | 2004

Disruptive technology roadmaps

Ronald N. Kostoff; Robert Boylan; Gene R. Simons

Disruptive technologies create growth in the industries they penetrate, or create entirely new industries, through the introduction of products and services that are dramatically cheaper, better, and more convenient. They often disrupt workforce participation by allowing technologically unsophisticated individuals to enter and become competitive in the industrial workforce. Disruptive technologies offer a revolutionary change in the conduct of processes or operations, and can evolve from the confluence of seemingly diverse technologies or result from an entirely new technological investigation.

Existing planning processes are notoriously poor at identifying the mix of sometimes highly disparate technologies required to address the multiple performance objectives of a particular market niche. For a number of reasons, especially the inability to look beyond short-term profitability and the risk/return tradeoff of longer-term projects, current strategic planning and management processes tend to promote sustaining technologies at the expense of disruptive technologies.

We propose a systematic approach to identifying disruptive technologies that is realistic and operable and takes advantage of the text mining literature. This literature-based discovery process is especially useful for identifying potential disruptive technologies that may require input from many diverse technological and management areas, and we believe it holds great potential for identifying projects with a higher probability of downstream success. We further suggest a process to take an identified potential disruptive technology from the “idea stage” through to the development of a potentially feasible product for the market. This second stage uses workshops and roadmapping to codify the ideas of the technological and management experts identified in the literature-based discovery stage.

Our goal is to describe and explain the pragmatic steps suggested by our innovative and practical process. The proposed process could identify technologies whose eventual development and application to specific problems would generate innovative products: technologies with the potential to redefine an industry or to create an entirely new industrial setting. The text-mining component of literature-based discovery identifies both the technical disciplines that are likely candidates for disruptive technological products and the experts in these critical technical and managerial disciplines. While this is but one way to investigate nascent disruptive technologies, we feel it is imperative that representatives of these potentially critical technical disciplines be included in the roadmap development process, either as implementers or as consultants. Every firm is looking for “the next great thing”. Literature-based discovery offers a starting point for identifying at least a portion of the major contributory technical and managerial disciplines necessary for potential disruptive technologies and discontinuous innovations; combining it with a practical workshop/roadmap process dramatically enhances the likelihood of success.
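The literature-based discovery step described above can be illustrated with a minimal Swanson-style co-occurrence sketch: two terms that never co-occur directly, but both co-occur with some intermediate term, are flagged as a candidate cross-discipline combination. This is not the authors' system; the corpus and terms below are invented for illustration.

```python
# Minimal sketch of literature-based discovery via term co-occurrence.
# Illustrative only -- not the process described in the paper.
from collections import defaultdict
from itertools import combinations

# Toy corpus: each record is the set of descriptor terms for one paper.
papers = [
    {"fuel cells", "membranes", "polymers"},
    {"polymers", "drug delivery"},
    {"membranes", "water treatment"},
    {"fuel cells", "catalysis"},
    {"catalysis", "nanoparticles"},
]

# Count direct co-occurrences of term pairs.
cooc = defaultdict(int)
for terms in papers:
    for a, b in combinations(sorted(terms), 2):
        cooc[(a, b)] += 1

def linked(a, b):
    return cooc.get(tuple(sorted((a, b))), 0) > 0

# Swanson-style "A-B-C" discovery: A and C are never linked directly,
# but both are linked to an intermediate term B -- a candidate for a
# previously unnoticed (potentially disruptive) combination.
vocab = sorted(set().union(*papers))
for a, c in combinations(vocab, 2):
    if linked(a, c):
        continue
    bridges = [b for b in vocab if b not in (a, c) and linked(a, b) and linked(b, c)]
    if bridges:
        print(f"candidate link: {a} <-> {c} via {bridges}")
```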


Scientometrics | 1998

The use and misuse of citation analysis in research evaluation

Ronald N. Kostoff

The present paper addresses some of the many possible uses of citations, including bookmark, intellectual heritage, impact tracker, and self-serving purposes. The main focus is on the applicability of citation analysis as an impact or quality measure. If a paper's bibliography is viewed as consisting of a directed (research impact or quality) component related to intellectual heritage and random components related to specific self-interest topics, then, for large numbers of citations from many different citing papers, the most significant intellectual heritage (research impact or quality) citations will aggregate, while the random author-specific self-serving citations will be scattered and will not accumulate. However, there are at least two limitations to this model of citation analysis for stand-alone use as a measure of research impact or quality. First, the reference to intellectual heritage could be positive or negative. Second, there could be systemic biases that affect the aggregate results; one of these, the “Pied Piper Effect”, is described in detail. Finally, the results of a short citation study comparing Russian and American papers in different technical fields are presented. The questions raised in interpreting these data highlight a few of the difficulties in attempting to interpret citation results without supplementary information. Leydesdorff (Leydesdorff, 1998) addresses the history of citations and citation analysis, and the transformation of a reference mechanism into a purportedly quantitative measure of research impact/quality. The present paper examines different facets of citations and citation analysis, and discusses the validity of citation analysis as a useful measure of research impact/quality.
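The aggregation argument above is easy to demonstrate with a toy simulation, under the stated assumption that heritage references recur across citing papers while self-serving references are drawn from a large random pool. All numbers below are illustrative, not from the paper.

```python
# Toy simulation of the citation model described above: each citing paper
# references a few shared "intellectual heritage" papers plus several
# author-specific (random) references. Numbers are illustrative only.
import random
from collections import Counter

random.seed(0)
heritage = ["H1", "H2", "H3"]            # genuinely foundational papers
pool = [f"R{i}" for i in range(200)]     # large pool of incidental references

counts = Counter()
for _ in range(100):                     # 100 citing papers
    refs = random.sample(heritage, 2)    # heritage refs recur across papers
    refs += random.sample(pool, 8)       # self-serving refs scatter widely
    counts.update(refs)

# Heritage papers aggregate; random references stay in the noise floor.
for ref, n in counts.most_common(6):
    print(ref, n)
```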


Journal of Intelligent Information Systems | 2000

Textual Data Mining to Support Science and Technology Management

Paul Losiewicz; Douglas W. Oard; Ronald N. Kostoff

This paper surveys applications of data mining techniques to large text collections, and illustrates how those techniques can be used to support the management of science and technology research. Specific issues that arise repeatedly in the conduct of research management are described, and a textual data mining architecture that extends a classic paradigm for knowledge discovery in databases is introduced. That architecture integrates information retrieval from text collections, information extraction to obtain data from individual texts, data warehousing for the extracted data, data mining to discover useful patterns in the data, and visualization of the resulting patterns. At the core of this architecture is a broad view of data mining—the process of discovering patterns in large collections of data—and that step is described in some detail. The final section of the paper illustrates how these ideas can be applied in practice, drawing upon examples from the recently completed first phase of the textual data mining program at the Office of Naval Research. The paper concludes by identifying some research directions that offer significant potential for improving the utility of textual data mining for research management applications.
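A skeletal sketch of the five-stage architecture described above, with each stage as a composable function. The function bodies are illustrative stand-ins, not the ONR system or any specific library.

```python
# Skeletal sketch of the textual-data-mining pipeline described above
# (retrieval -> extraction -> warehousing -> mining -> reporting).
# Each stage is an illustrative stand-in, not the actual ONR program.
import re
from collections import Counter

def retrieve(corpus, query):
    """Information retrieval: keep texts matching the query term."""
    return [t for t in corpus if query.lower() in t.lower()]

def extract(text):
    """Information extraction: pull lowercase word tokens from one text."""
    return re.findall(r"[a-z]+", text.lower())

def warehouse(docs):
    """Warehousing: store extracted records in a uniform structure."""
    return [{"id": i, "tokens": extract(t)} for i, t in enumerate(docs)]

def mine(records, stopwords=frozenset({"the", "of", "and", "to", "in"})):
    """Mining: discover frequent terms across the warehoused records."""
    counts = Counter(tok for r in records for tok in r["tokens"])
    return [(w, n) for w, n in counts.most_common() if w not in stopwords]

corpus = [
    "Data mining of the research literature supports S&T management.",
    "Text mining extracts patterns from large text collections.",
    "Visualization of mined patterns aids research managers.",
]
patterns = mine(warehouse(retrieve(corpus, "mining")))
print(patterns[:5])   # stand-in for the visualization stage
```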


Technological Forecasting and Social Change | 2001

Text mining using database tomography and bibliometrics: A review

Ronald N. Kostoff; Darrell Ray Toothman; Henry J. Eberhart; James A. Humenik

Database tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multiword phrase frequencies and phrase proximities (physical closeness of the multiword technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT has been used to derive technical intelligence from a variety of textual database sources, most recently the published technical literature as exemplified by the Science Citation Index (SCI) and the Engineering Compendex (EC). Phrase frequency analysis (the occurrence frequency of multiword technical phrases) provides the pervasive technical themes of the topical databases of interest, and phrase proximity analysis provides the relationships among the pervasive technical themes. In the structured published literature databases, bibliometric analysis of the database records supplements the DT results by identifying: the recent most prolific topical area authors; the journals that contain numerous topical area papers; the institutions that produce numerous topical area papers; the keywords specified most frequently by the topical area authors; the authors whose works are cited most frequently in the topical area papers; and the particular papers and journals cited most frequently in the topical area papers. This review paper summarizes: (1) the theory and background development of DT; (2) past published and unpublished literature study results; (3) present application activities; (4) potential expansion to new DT applications. In addition, application of DT to technology forecasting is addressed.
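The two DT components can be sketched minimally: bigram frequencies as a stand-in for multiword phrase frequency, and co-occurrence within a fixed word window as a stand-in for phrase proximity. The real DT algorithms are more elaborate; everything below is illustrative.

```python
# Minimal sketch of the two Database Tomography components described
# above: multiword phrase (here, bigram) frequencies, and phrase
# proximity measured as co-occurrence within a fixed word window.
# Illustrative only -- the actual DT algorithms are more elaborate.
import re
from collections import Counter
from itertools import combinations

text = ("solar wind interacts with magnetosphere models; "
        "solar wind plasma data constrain magnetosphere models")

tokens = re.findall(r"[a-z]+", text.lower())

# (1) Phrase frequency: count adjacent word pairs (bigrams).
bigrams = list(zip(tokens, tokens[1:]))
freq = Counter(" ".join(b) for b in bigrams)
themes = [p for p, n in freq.items() if n > 1]   # pervasive themes
print("themes:", themes)

# (2) Phrase proximity: how often two themes start within W words.
W = 6
positions = {t: [i for i, b in enumerate(bigrams) if " ".join(b) == t]
             for t in themes}
for a, b in combinations(themes, 2):
    near = sum(1 for i in positions[a] for j in positions[b] if abs(i - j) <= W)
    print(f"proximity({a!r}, {b!r}) = {near}")
```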


Information Processing and Management | 1998

Database tomography for technical intelligence: a roadmap of the near-earth space science and technology literature

Ronald N. Kostoff; Henry J. Eberhart; Darrell Ray Toothman

Database Tomography (DT) is a system which includes algorithms for extracting multi-word phrase frequencies and performing phrase proximity analyses (relating physical closeness of the multi-word technical phrases to thematic relationships) on any type of large textual database. As an illustration of the DT process applied to the published literature, DT was used to derive technical intelligence from a near-earth space (NES) database derived from the Science Citation Index and the Engineering Compendex. Phrase frequency analysis (the occurrence frequency of multi-word technical phrases) provided the pervasive technical themes of the space database, and the phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the NES literature supplemented the DT results by identifying: the recent most prolific NES authors; the journals which contain numerous NES papers; the institutions which produce numerous NES papers; the keywords most frequently specified by the NES authors; the authors whose works are cited most frequently in the NES papers; and the particular papers and journals cited most frequently in the NES papers.


Scientometrics | 2007

Global nanotechnology research metrics

Ronald N. Kostoff; Raymond G. Koytcheff; Clifford G. Y. Lau

Text mining was used to extract technical intelligence from the open source global nanotechnology and nanoscience research literature. An extensive nanotechnology/nanoscience-focused query was applied to the Science Citation Index/Social Science Citation Index (SCI/SSCI) databases. The nanotechnology/nanoscience research literature infrastructure (prolific authors, key journals/institutions/countries, most cited authors/journals/documents) was obtained using bibliometrics. A novel addition was the use of institution and country auto-correlation maps to show co-publishing networks among institutions and among countries, and the use of institution-phrase and country-phrase cross-correlation maps to show institution networks and country networks based on use of common terminology (proxy for common interests). The use of factor matrices quantified further the strength of the linkages among institutions and among countries, and validated the co-publishing networks shown graphically on the maps.
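The two map types can be illustrated on toy data: an auto-correlation (co-publishing) count per country pair, and a cross-correlation based on cosine similarity of country term profiles. The records below are invented; the paper built its maps from SCI/SSCI data.

```python
# Illustrative sketch of the two map types described above: a country
# auto-correlation (co-publishing) matrix, and a country cross-correlation
# based on shared terminology. All data are made up for illustration.
from itertools import combinations
from math import sqrt

# Toy records: (countries on the paper, salient terms in the paper).
records = [
    ({"USA", "China"},   {"nanotube", "catalyst"}),
    ({"USA"},            {"nanotube", "sensor"}),
    ({"China", "Japan"}, {"catalyst", "coating"}),
    ({"Japan"},          {"sensor", "coating"}),
]

# (1) Auto-correlation: count co-authored papers per country pair.
copub = {}
for countries, _ in records:
    for a, b in combinations(sorted(countries), 2):
        copub[(a, b)] = copub.get((a, b), 0) + 1
print("co-publishing:", copub)

# (2) Cross-correlation: cosine similarity of country term profiles,
# a proxy for shared research interests even without co-authorship.
profile = {}
for countries, terms in records:
    for c in countries:
        p = profile.setdefault(c, {})
        for t in terms:
            p[t] = p.get(t, 0) + 1

def cosine(p, q):
    dot = sum(p[t] * q.get(t, 0) for t in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm

for a, b in combinations(sorted(profile), 2):
    print(f"term similarity({a}, {b}) = {cosine(profile[a], profile[b]):.2f}")
```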


Scientometrics | 2002

Citation analysis of research performer quality

Ronald N. Kostoff

Background: Citation analysis for evaluative purposes typically requires normalization against some control group of similar papers. Selection of this control group is an open question.

Objectives: Gain a better understanding of control-group requirements for credible normalization.

Approach: Citation analysis was performed on prior publications of two proposing research units, to help estimate team research quality. Citations of each unit's publications were compared to citations received by thematically and temporally similar papers.

Results: Identification of thematically similar papers was very complex and labor intensive, even with relatively few control papers selected.

Conclusions: A credible citation analysis for determining performer or team quality should have the following components:

– multiple technical experts, to average out individual bias and subjectivity;
– a process for comparing performer or team output papers with a normalization base of similar papers;
– a process for retrieving a substantial fraction of candidate normalization base papers;
– manual evaluation of many candidate normalization base papers, to obtain high thematic similarity and statistical representation.
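In its simplest form, the normalization at the heart of this study reduces to comparing a unit's mean citation count against the mean of a matched control group. A minimal sketch with invented counts:

```python
# Minimal sketch of the normalization described above: a unit's citation
# performance compared against thematically and temporally similar
# control papers. All counts are invented for illustration.
from statistics import mean

unit_papers   = [14, 9, 22, 5]            # citations to the unit's papers
control_group = [8, 6, 12, 7, 10, 9]      # citations to matched control papers

# Normalized impact: unit mean divided by control-group mean.
ratio = mean(unit_papers) / mean(control_group)
print(f"normalized citation ratio: {ratio:.2f}")  # > 1 means above peers
```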


Technovation | 1999

Science and technology innovation

Ronald N. Kostoff

This paper describes two novel complementary approaches for systematically enhancing the process of innovation and discovery. One approach is workshop-based and the other approach is literature-based. Both approaches have the common feature of exploring knowledge from very disparate technical disciplines and technologies, and transferring insights and understanding from one or more disparate technical areas to another. It is highly recommended that the approaches be combined into a single process. The integrated approach has the potential to be a major breakthrough for the systematic promotion of innovation and discovery.


Journal of the Association for Information Science and Technology | 1999

Hypersonic and supersonic flow roadmaps using bibliometrics and database tomography

Ronald N. Kostoff; Henry J. Eberhart; Darrell Ray Toothman

Database Tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multiword phrase frequencies and phrase proximities (physical closeness of the multiword technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a hypersonic/supersonic flow (HSF) database derived from the Science Citation Index and the Engineering Compendex. Phrase frequency analysis by the technical domain expert provided the pervasive technical themes of the HSF database, and the phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the HSF literature supplemented the DT results with author/journal/institution publication and citation data. Comparisons of HSF results with past analyses of similarly structured near-earth space and chemistry databases are made. One important finding is that many of the normalized bibliometric distribution functions are extremely consistent across these diverse technical domains.
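The finding about consistent normalized distributions can be illustrated by computing the fraction of authors with n papers in two toy domains and comparing the shapes. The data below are invented; the paper compared HSF with near-earth space and chemistry corpora.

```python
# Sketch of the comparison described above: a normalized author-productivity
# distribution (fraction of authors with n papers) for two toy "domains".
# Invented data, purely to illustrate the normalization step.
from collections import Counter

def normalized_distribution(papers_per_author):
    counts = Counter(papers_per_author)
    total = len(papers_per_author)
    return {n: c / total for n, c in sorted(counts.items())}

domain_a = [1, 1, 1, 1, 2, 2, 3, 5]   # most authors publish once (Lotka-like)
domain_b = [1, 1, 1, 2, 2, 3, 1, 6]

for name, data in [("A", domain_a), ("B", domain_b)]:
    print(name, normalized_distribution(data))
```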

Collaboration


Dive into Ronald N. Kostoff's collaborations.

Top Co-Authors

Dustin Johnson, Office of Naval Research
Michael B. Briggs, Marine Corps Warfighting Laboratory
Joel A. Block, Rush University Medical Center
Jesse A. Stump, Office of Naval Research