Cameron Hughes
Youngstown State University
Publication
Featured research published by Cameron Hughes.
computer, information, and systems sciences, and engineering | 2008
Tracey Hughes; Cameron Hughes; Alina Lazar
HTML-based standards and the new XML-based standards for digital transcripts generated by court recorders offer more search and analysis options than traditional CAT (Computer-Aided Transcription) technology. The LegalXML standards offer promising opportunities for new methods of search for legal documents. However, the search techniques employed are still largely restricted to keyword search and various probabilistic association techniques. Rather than keyword and association searches, we are interested in semantic and inference-based search. In this paper, we explore a process for transforming the semi-structured representation of the digital transcript into an epistemic structured representation that supports semantic and inference-based search.
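The transformation the abstract describes could look roughly like the toy sketch below, which pairs questions with answers in a LegalXML-style fragment and emits proposition records suitable for later semantic search. The element names (qa, question, answer) and the record fields are invented assumptions, not the schema used in the paper.

```python
# Toy sketch: turn a LegalXML-style transcript fragment into proposition
# records. The <qa>/<question>/<answer> element names are assumptions.
import xml.etree.ElementTree as ET

TRANSCRIPT = """
<transcript>
  <qa speaker="witness1">
    <question>Were you at the warehouse on May 3rd?</question>
    <answer>Yes, I was there until noon.</answer>
  </qa>
</transcript>
"""

def extract_propositions(xml_text):
    """Pair each question with its answer and tag the asserting speaker."""
    root = ET.fromstring(xml_text)
    records = []
    for qa in root.iter("qa"):
        records.append({
            "speaker": qa.get("speaker"),
            "question": qa.findtext("question").strip(),
            "assertion": qa.findtext("answer").strip(),
        })
    return records

for record in extract_propositions(TRANSCRIPT):
    print(record["speaker"], "->", record["assertion"])
```

A real pipeline would then normalize each assertion into a logical form; this sketch stops at the structured-record stage.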
international conference on intelligent computing | 2009
Cameron Hughes; Tracey Hughes
Our current work is directed toward the mining and analysis of interrogative transcripts stored in digital form. In particular, we are interested in excavating the propositions and assertions that are implicit or entailed within the discourse of trial transcripts, law enforcement interrogations, and congressional and other types of legal hearings. We are investigating the use of epistemic agents to mine and then perform epistemological analysis that can be used as the basis for understanding the consistency, validity, and soundness of the transcript as a whole. We use interrogative entailment as a mining process to qualify the credentials of the transcript. In this paper, we describe the structure of our epistemic agents and the transcript mining process used to excavate statements that are entailed and inferred in the content of transcripts.
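The notion of interrogative entailment in the abstract above can be illustrated with a toy heuristic: a polar (yes/no) question plus its answer entails a declarative proposition. The question-rewriting rule below is an invented simplification, not the paper's epistemological analysis.

```python
# Toy illustration of interrogative entailment: a yes/no question plus its
# answer entails a (possibly negated) assertion. The auxiliary-verb
# rewriting heuristic is an invented simplification.
def entailed_proposition(question, answer):
    """Turn 'Did you X?' + yes/no into an assertion or its negation."""
    q = question.rstrip("?").strip()
    for aux in ("Did you ", "Were you ", "Have you "):
        if q.startswith(aux):
            body = "you " + q[len(aux):]
            break
    else:
        body = q
    affirmed = answer.strip().lower().startswith("yes")
    return body if affirmed else "not the case that " + body

print(entailed_proposition("Did you sign the contract?", "Yes, I did."))
# you sign the contract
```

A genuine system would need tense normalization and far richer linguistic handling; the point here is only that question-answer pairs, not lone sentences, carry the entailed content.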
AI Matters | 2018
Cameron Hughes; Tracey Hughes
In the past fifteen years artificial intelligence has changed from being the preoccupation of a handful of scientists to a thriving enterprise that has captured the imagination of world leaders and ordinary citizens alike.
international conference on knowledge capture | 2015
Cameron Hughes; Tracey Hughes; Trevor Watkins; James Dittrich
The proliferation of mobile computing, the Internet of Things, hosting services, and cloud computing has increased the burden of computer log file analysis for system administrators, network analysts, security analysts, and large server hosting organizations. This is due to the voluminous amounts of log entries now produced by these technologies. Since log file analysis is used to monitor and control the overall health of the computer systems behind these technologies, it has become increasingly important. The spike in the number of log entries has made real-time log analysis by human effort untenable and automated real-time log analysis essential. The log analysis process often requires human insight and judgment before a diagnosis or information synthesis becomes apparent. So while automated log analysis methods are essential, they must also be knowledge-based to be effective. In this paper, we describe a knowledge-based approach to partial computer self-regulation that uses autonomous epistemic agents to analyze and diagnose syslog entries in real time, using a priori and a posteriori knowledge of log file analysis within a hybrid deductive-abductive first-order logic model. The epistemic agent uses its a priori knowledge of Unix/Linux-based computer systems in conjunction with a posteriori knowledge extracted from log file entries to uncover negative and positive scenarios and take advantage of opportunities to regulate a computer system's homeostasis.
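The knowledge-based flavor of the approach above can be sketched minimally: a priori rules (patterns an administrator knows indicate trouble) are matched against syslog entries to produce diagnoses. The rule set and log lines below are invented examples; the paper's agents use a much richer deductive-abductive first-order logic model rather than plain pattern matching.

```python
# Minimal sketch of knowledge-based syslog diagnosis: a priori rules
# mapped onto log entries. Rules and log lines are invented examples.
import re

A_PRIORI_RULES = [
    (re.compile(r"Out of memory: Kill(ed)? process"), "memory-pressure"),
    (re.compile(r"Failed password for"), "possible-intrusion"),
    (re.compile(r"I/O error.*sd[a-z]"), "disk-failure"),
]

def diagnose(log_lines):
    """Return (line, diagnosis) pairs for entries matching a known rule."""
    findings = []
    for line in log_lines:
        for pattern, label in A_PRIORI_RULES:
            if pattern.search(line):
                findings.append((line, label))
    return findings

logs = [
    "May  3 10:12:01 host kernel: Out of memory: Kill process 4321 (java)",
    "May  3 10:12:05 host sshd[999]: Failed password for root from 10.0.0.5",
]
for line, label in diagnose(logs):
    print(label, "<-", line)
```

The a posteriori side of the paper's model (knowledge learned from the entries themselves) has no analogue in this static rule table.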
international conference on artificial intelligence and law | 2011
Cameron Hughes; Tracey Hughes; Alina Lazar
We are investigating the potential use of trial transcripts as sources of social knowledge for epistemic agents. But we are immediately faced with the reality that not all transcripts are equal. The quality of a transcript is partially related to the knowledge, consistency, and integrity of the individuals who testify during the course of the trial, and to the nature and sophistication of the questions. Before we can determine whether a transcript will be useful as a knowledge source for an epistemic agent, we have to identify the consistency and quality of the knowledge present in the transcript. Coherence clusters demarcate the network of positively and negatively related propositions in the transcript. Justification clusters define the subclusters of propositions that support or justify other propositions in a coherence cluster. These clusters can be used to determine the nature of the consistency of the knowledge potentially present in the transcript. In this paper, we show how these clusters are identified using epistemic analysis. Our goal is to use these clusters as the basis for an epistemic metric that determines the quality of the propositional knowledge present in a transcript.
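Coherence clusters as described above can be pictured as connected components over the graph of positively related propositions. The toy sketch below groups propositions linked by supporting relations; the proposition IDs and edges are invented for illustration, and the paper's epistemic analysis derives such relations from the transcript itself.

```python
# Toy sketch of coherence clusters: propositions linked by positive
# (supporting) relations are grouped via connected components.
from collections import defaultdict

def coherence_clusters(propositions, positive_edges):
    """Group propositions connected through supporting relations."""
    adjacency = defaultdict(set)
    for a, b in positive_edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, clusters = set(), []
    for p in propositions:
        if p in seen:
            continue
        stack, component = [p], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node] - component)
        seen |= component
        clusters.append(component)
    return clusters

props = ["p1", "p2", "p3", "p4"]
supports = [("p1", "p2"), ("p2", "p3")]  # p4 stands alone
# two clusters: p1-p2-p3 together, p4 by itself
print(coherence_clusters(props, supports))
```

Negatively related propositions (contradictions) and justification subclusters would require labeled edges on top of this skeleton.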
Assembly Automation | 1998
Cameron Hughes; Tracey Hughes
IEEE Internet Computing | 2010
Cameron Hughes; Tracey Hughes
computer, information, and systems sciences, and engineering | 2010
Cameron Hughes; Tracey Hughes
Archive | 2000
Cameron Hughes; Tracey Hughes
Archive | 1999
Cameron Hughes; Tracey Hughes