Martin Frické
University of Arizona
Publications
Featured research published by Martin Frické.
Journal of the Association for Information Science and Technology | 2004
Martin Frické; Don Fallis
The Internet is increasingly being used as a source of reference information. Internet users need to be able to distinguish accurate information from inaccurate information. Toward this end, information professionals have published checklists for evaluating information. However, such checklists can be effective only if the proposed indicators of accuracy really do indicate accuracy. This study implements a technique for testing such indicators of accuracy and uses it to test indicators of accuracy for answers to ready reference questions. Many of the commonly proposed indicators of accuracy (e.g., that the Web site does not contain advertising) were not found to be correlated with accuracy. However, the link structure of the Internet can be used to identify Web sites that are more likely to contain accurate reference information.
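As an illustrative sketch (not code from the study), testing whether a proposed indicator is correlated with accuracy can be done by tabulating a 2x2 contingency table of (indicator present, answer accurate) observations and computing the phi coefficient; the sample data below are invented for illustration:

```python
from math import sqrt

def phi_coefficient(pairs):
    """Phi coefficient for two binary variables, given as (indicator, accurate) pairs."""
    a = sum(1 for i, acc in pairs if i and acc)          # indicator present, accurate
    b = sum(1 for i, acc in pairs if i and not acc)      # indicator present, inaccurate
    c = sum(1 for i, acc in pairs if not i and acc)      # indicator absent, accurate
    d = sum(1 for i, acc in pairs if not i and not acc)  # indicator absent, inaccurate
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Hypothetical observations: (site_has_no_advertising, answer_was_accurate)
sample = [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1), (0, 1), (1, 0), (0, 0)]
print(phi_coefficient(sample))  # prints 0.0: the indicator tells us nothing here
```

A phi near zero, as in this made-up sample, is the pattern the study reports for indicators such as absence of advertising.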
Journal of the Association for Information Science and Technology | 2015
Martin Frické
The article considers whether Big Data, in the form of data‐driven science, will enable the discovery, or appraisal, of universal scientific theories, instrumentalist tools, or inductive inferences. It points out, initially, that such aspirations are similar to the now‐discredited inductivist approach to science. On the positive side, Big Data may permit larger sample sizes, cheaper and more extensive testing of theories, and the continuous assessment of theories. On the negative side, data‐driven science encourages passive data collection, as opposed to experimentation and testing, and hornswoggling (“unsound statistical fiddling”). The roles of theory and data in inductive algorithms, statistical modeling, and scientific discoveries are analyzed, and it is argued that theory is needed at every turn. Data‐driven science is a chimera.
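The hornswoggling worry can be made concrete with a small simulation (hypothetical, not from the article): if enough "features" that are pure noise by construction are screened against an outcome, with no theory constraining the search, some correlation will always look impressive:

```python
import random

def phi(xs, ys):
    """Phi coefficient between two equal-length 0/1 sequences."""
    a = sum(1 for x, y in zip(xs, ys) if x and y)
    b = sum(1 for x, y in zip(xs, ys) if x and not y)
    c = sum(1 for x, y in zip(xs, ys) if not x and y)
    d = sum(1 for x, y in zip(xs, ys) if not x and not y)
    denom = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return (a * d - b * c) / denom if denom else 0.0

rng = random.Random(0)                      # fixed seed for reproducibility
n = 40                                      # sample size
outcome = [rng.randint(0, 1) for _ in range(n)]
# 200 candidate "features", all pure coin flips unrelated to the outcome
features = [[rng.randint(0, 1) for _ in range(n)] for _ in range(200)]
best = max(abs(phi(f, outcome)) for f in features)
print(f"strongest noise correlation: {best:.2f}")
```

The strongest of the 200 spurious correlations looks like a finding; without theory about which features could matter, data-driven screening cannot tell it apart from a real one.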
The Library Quarterly | 2000
Martin Frické; Kay Mathiesen; Don Fallis
The American Library Association's (ALA's) Library Bill of Rights is based on a foundation of ethical presuppositions. In this article, these presuppositions are spelled out and critically examined in light of several ethical theories (for example, utilitarianism, natural rights theory, and social contract theory). We suggest that social contract theory provides the strongest argument for a right to access to information (and to have that information provided by public libraries). We argue, however, that the right to access to information is not unlimited. Limiting access (including censorship) is appropriate, for example, when such a limitation is necessary to protect a more fundamental right. Finally, we argue that the ALA's advocacy of an unlimited right to access is based on a mistaken understanding of what follows from the fact that all of our judgments are fallible.
Journal of the Association for Information Science and Technology | 1997
Martin Frické
A suggestion is made regarding the nature of information: that the information in a theory be evaluated by measuring either its distance from the perfect theory or its distance from the right answer to the information-seeking question that led to it. The measures here are provided by the Tichý-Hilpinen-Oddie-Niiniluoto likeness measures, which were introduced in the context of the philosophical problem of verisimilitude. One feature of this suggestion that differentiates it from most theories of information is that it does not use or depend on probabilities or uncertainty. Another unusual feature is that it permits false views or theories to possess information.
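A minimal sketch of a Tichý-style likeness measure over the standard three-atom toy weather language (hot, rainy, windy); this illustrates the general idea only, not the full first-order machinery of the measures. A theory is represented as the set of truth assignments (constituents) it allows, and its likeness to the truth is one minus the mean normalized Hamming distance of those assignments from the true one:

```python
from itertools import product

ATOMS = ("hot", "rainy", "windy")   # toy propositional language (illustrative)

def distance(w1, w2):
    """Normalized Hamming distance between two constituents (truth assignments)."""
    return sum(w1[a] != w2[a] for a in ATOMS) / len(ATOMS)

def likeness(theory_worlds, truth):
    """Average-closeness verisimilitude: 1 minus the mean distance of the
    theory's allowed worlds from the true constituent (Tichý-style)."""
    return 1 - sum(distance(w, truth) for w in theory_worlds) / len(theory_worlds)

def world(hot, rainy, windy):
    return {"hot": hot, "rainy": rainy, "windy": windy}

truth = world(True, True, True)
all_worlds = [world(*vals) for vals in product([True, False], repeat=3)]

print(likeness([world(True, True, False)], truth))    # false theory, one atom wrong
print(likeness([world(False, False, False)], truth))  # false theory, everything wrong
print(likeness(all_worlds, truth))                    # tautology: maximally uninformative
```

Note the feature the abstract highlights: the first theory is false yet scores 2/3, better than the true-but-empty tautology at 1/2, and no probabilities are involved.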
Journal of Education for Library and Information Science | 2002
Don Fallis; Martin Frické
The Internet has become an important source of health information. However, several empirical studies indicate that there is a significant amount of inaccurate health information on the Internet. Thus, it is important that Internet users be able to distinguish the accurate information from the inaccurate information. Information professionals have developed checklists for evaluating the quality of health information to assist Internet users in this regard. Such checklists can only be effective if the proposed indicators of accuracy really do indicate accuracy. This article reports on two empirical studies that implement an effective technique for identifying such indicators of accuracy. In particular, one of these studies indicates that the HONcode logo is more likely to be displayed on web sites that contain accurate health information. Many commonly proposed indicators of accuracy (e.g., the author having medical credentials, currency, lack of advertising) were not found to be correlated with accuracy.
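The statistical side of such a test can be sketched with a Pearson chi-square statistic on a 2x2 contingency table; the counts below are invented for illustration, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:
                accurate   inaccurate
    logo           a           b
    no logo        c           d
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Hypothetical counts for sites displaying the HONcode logo vs. not
stat = chi_square_2x2(a=30, b=10, c=20, d=40)
print(round(stat, 2))  # prints 16.67
```

A statistic above the 3.84 cutoff (chi-square, df = 1, alpha = .05) would count as evidence that the indicator and accuracy are associated; with these invented counts the logo would pass, which is the pattern the study reports for HONcode.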
Information Processing and Management | 1998
Martin Frické
Tague-Sutcliffe offers a theory to permit the measurement of information services by means of the user-centered notion of the subjective information associated with the interaction between a user and a record on an occasion. Some suggestions are made to improve the theory's foundations.
The British Journal for the Philosophy of Science | 1997
Martin Frické
Hyperproof is one of the first systems to permit and encourage reasoning across heterogeneous media. Its advocates argue that it has merits over and above the obvious pragmatic and cognitive ones. This paper suggests analysing Hyperproof-like systems in terms of languages interpreted over a common conceptual scheme and translation relations between logical expressions in such languages. This analysis shows that, despite initial appearances, Hyperproof has no real theoretical merits apart from its admittedly important pragmatic advantages.
Journal of the Association for Information Science and Technology | 2017
Martin Frické
The book is primarily a how-to book: if you are an adherent, practitioner, advocate, or would-be user of the BFO, then this book is to help you build your own ontology or ontologies using BFO. The book is very well written and clear. In its opening chapters, it urges sensible default philosophical positions such as fallibilism and realism, and it educates, and cautions, on logical and philosophical mistakes over representing data and information, mistakes that are commonly made by folk not trained in logic or philosophy. It has extensive suggestions throughout on further reading. As the book explains in its Introduction, the wider problem that BFO takes on is that of combining the diversity of information, in the same or similar scientific domains, with the use of computers for storing, retrieving, and reasoning with that information. In the absence of prophylactics like BFO, there is the grim prospect of information silos: scientific information sits in different stores, the stores cannot talk to each other, and, possibly, outsiders cannot interrogate any of them. The solution that BFO proposes is the use of "ontologies," and these are intended to provide a semantics for the subject domains and thus to go beyond, for example, controlled vocabularies. An
Journal of Documentation | 2013
Martin Frické
Purpose – The purpose of this paper is to clarify the ontological and epistemological basis of classification. Design/methodology/approach – Attention is drawn to a 1785 article on abstraction by Thomas Reid, and the contents and theories of the article are explained. The Reid article both provides a sound approach to classification and is interesting historically, as it influenced the classification pioneer Charles Ammi Cutter who, in turn, is responsible for much of the modern theory of functional bibliography. Reid's account is supplemented by brief descriptions of fallibilism and fuzziness. An associated view, Aristotelian essentialism, is explained and criticized. Some observations are offered on the role of prototypes in classification and on the monothetic-polythetic distinction. Findings – Reid's theories, suitably embedded in fallibilism and augmented with a respect for truth, provide a sound ontological and epistemological basis for classification. Originality/value – Reid's essay, together with an ap...
Archive | 2012
Martin Frické
When computers answer our questions in mathematics and logic, they also need to be able to supply justification and explanatory insight. Typical theorem provers do not do this. The paper focuses on tableau theorem provers for First Order Predicate Calculus. It introduces a general construction and a technique for converting the tableau data structures of these provers into human-friendly linear proofs using any familiar rule set and 'laws of thought'. The construction uses a type of tableau in which only leaf nodes are extended. To produce insightful proofs, improvements need to be made to the intermediate output. Dependency analysis and refinement, i.e., compilation of proofs, can produce benefits. To go further, the paper makes other suggestions, including a perhaps surprising one: the notion of a best or insightful proof is an empirical matter. All possible theorems, or all possible proofs, distribute evenly, in some sense or other, among the possible uses of inference steps. However, with the proofs of interest to humans this uniformity of distribution does not hold. Humans favor certain inferences over others that are structurally very similar. The author's research has taken many sample questions and proofs from logic texts, scholastic tests, and similar sources, and analyzed the best proofs for them ('best' here usually meaning shortest). This empirical research gives rise to some suggestions on heuristics. The general point is: humans are attuned to certain forms of inference, empirical research can tell us what those forms are, and that research can educate us as to how tableau theorem provers, and their symbiotic linear counterparts, should run. In sum, tableau theorem provers, coupled with transformations to linear proofs and empirically sourced heuristics, can provide transparent and accessible theorem proving.
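The tableau method itself can be sketched in miniature; the paper treats full first-order tableaux and their conversion to linear proofs, so this propositional toy (my own illustration, not the paper's construction) shows only the core mechanism: a branch closes when it contains a literal and its negation, conjunctions extend the current branch, disjunctions split it, and a formula is a theorem when every branch of its negation's tableau closes:

```python
def tableau_sat(formulas, literals=frozenset()):
    """Return True iff the list of formulas is jointly satisfiable, by tableau.
    Formulas are tuples: ('atom', name), ('not', f), ('and', f, g), ('or', f, g)."""
    if not formulas:
        return True                                   # open (fully expanded) branch
    f, rest = formulas[0], formulas[1:]
    op = f[0]
    if op == "atom" or (op == "not" and f[1][0] == "atom"):
        name = f[1] if op == "atom" else f[1][1]
        lit = (op == "atom", name)                    # (polarity, atom name)
        if ((not lit[0]), name) in literals:          # branch closes on p and not-p
            return False
        return tableau_sat(rest, literals | {lit})
    if op == "not":
        g = f[1]
        if g[0] == "not":                             # double negation
            return tableau_sat([g[1]] + rest, literals)
        if g[0] == "and":                             # not(A and B): split into not-A | not-B
            return (tableau_sat([("not", g[1])] + rest, literals)
                    or tableau_sat([("not", g[2])] + rest, literals))
        if g[0] == "or":                              # not(A or B): extend with not-A, not-B
            return tableau_sat([("not", g[1]), ("not", g[2])] + rest, literals)
    if op == "and":                                   # A and B: extend the branch with both
        return tableau_sat([f[1], f[2]] + rest, literals)
    if op == "or":                                    # A or B: split the branch
        return (tableau_sat([f[1]] + rest, literals)
                or tableau_sat([f[2]] + rest, literals))
    raise ValueError(f"unknown connective {op}")

def valid(f):
    """A formula is a theorem iff its negation has no open tableau branch."""
    return not tableau_sat([("not", f)])

p = ("atom", "p")
print(valid(("or", p, ("not", p))))   # prints True: law of excluded middle
print(valid(p))                       # prints False: p alone is not a theorem
```

A recursive trace of the branch splits here is exactly the tree structure that the paper's technique would then compile into a human-readable linear proof.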