Kenneth Reynolds
University of Central Florida
Publications
Featured research published by Kenneth Reynolds.
international conference on tools with artificial intelligence | 2007
Anna Koufakou; Enrique Ortiz; Michael Georgiopoulos; Georgios C. Anagnostopoulos; Kenneth Reynolds
Outlier detection has received significant attention in many applications, such as detecting credit card fraud or network intrusions. Most existing research focuses on numerical datasets and cannot be applied directly to categorical sets, where there is little sense in calculating distances among data points. Furthermore, a number of outlier detection methods require quadratic time with respect to the dataset size and usually multiple dataset scans. These characteristics are undesirable for large datasets, potentially scattered over multiple distributed sites. In this paper, we introduce Attribute Value Frequency (AVF), a fast and scalable outlier detection strategy for categorical data. AVF scales linearly with the number of data points and attributes, and relies on a single data scan. AVF is compared with a set of representative outlier detection approaches that have not previously been contrasted against each other. Our proposed solution is experimentally shown to be significantly faster, and as effective in discovering outliers.
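The AVF idea described in the abstract can be sketched in a few lines: one scan builds per-attribute value counts, and each record is scored by the average frequency of its attribute values, so records made up of rare values get the lowest scores. This is a minimal illustration based on the abstract's description, not the paper's exact implementation; the toy dataset is invented for the example.

```python
from collections import Counter

def avf_scores(records):
    """AVF scores for a list of equal-length tuples of categorical values.

    One pass builds frequency tables per attribute; each record's score is
    the mean frequency of its own attribute values (low score = outlier).
    """
    m = len(records[0])
    counts = [Counter(r[i] for r in records) for i in range(m)]
    return [sum(counts[i][r[i]] for i in range(m)) / m for r in records]

data = [
    ("red",  "small", "round"),
    ("red",  "small", "round"),
    ("red",  "large", "round"),
    ("blue", "tiny",  "square"),  # every value is rare -> lowest AVF score
]
scores = avf_scores(data)
outlier = data[scores.index(min(scores))]
```

Because the frequency tables are simple counts, they can also be merged across distributed sites before scoring, which is what makes the approach attractive for scattered data.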
computational intelligence | 2005
Olcay Kursun; Kenneth Reynolds; Ronald Eaglin; Bing Chen; Michael Georgiopoulos
Auto theft is the most expensive property crime in the nation, and it is on the rise. Predicting auto drop-off locations can increase the probability of offender apprehension. For successful prediction, the patterns of thefts are first identified. A prototype expert system then successfully identified embedded drop-off location clusters that were previously unknown to investigators. The system was developed using the expert knowledge of auto theft investigators along with spatial and temporal auto theft event data. Drop-off clusters were identified and validated. A map interface allows the user to visualize the feature clusters and produce detailed reports. Such GIS applications give us the ability to attain a geographical perspective of incidents within the community, thus helping law enforcement officers discover the patterns of incidents and take the necessary measures to prevent them.
intelligence and security informatics | 2006
Olcay Kursun; Kenneth Reynolds; Oleg V. Favorov
A method is developed for extracting robust human facial features that can be used on never-before-seen individuals in homeland security tasks such as human tracking or matching photos of the deceased against those of missing individuals (e.g., in the aftermath of the recent Asian tsunami).
intelligence and security informatics | 2008
Olcay Kursun; Michael Georgiopoulos; Kenneth Reynolds
As is the case with all database systems that collect and store data, data input errors occur, resulting in less than perfect data integrity, or what is commonly referred to as the “dirty data” problem. American investigators are not familiar with many foreign names, such as Zacarias Moussaoui. If the first or last name is spelled incorrectly during a query, the person's record could be missed. Individuals who are chronic offenders, and those who are attempting to evade detection, use aliases; Moussaoui is also known as Shaqil and Abu Khalid al Sahrawi. Unless smart analytical tools are available for effective name matching where data integrity problems, challenging name spellings, and deliberate obfuscation are present, the likelihood of missing a critical record is high. This paper addresses some of the problems stemming from unreliable and inaccurate law enforcement data. Although the ideas proposed use “name data” as an illustration of how to deal with dirty data, the proposed approaches will be extended to other types of dirty data in law enforcement databases, such as addresses and stolen item/article names, descriptions, and brand names.
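One standard technique for the kind of name matching discussed above is phonetic coding, which maps similarly pronounced names to the same key so that spelling variants still collide in a query. As a sketch, here is the classic American Soundex code; this is a generic, well-known method used to illustrate the problem, not the paper's own approach.

```python
def soundex(name):
    """Classic American Soundex: first letter plus three digits.

    Vowels (and y) separate codes; h and w do not; adjacent equal codes
    collapse; the result is padded/truncated to four characters.
    """
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    first = name[0].upper()
    digits = []
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":
            continue              # h/w are transparent: they keep prev as-is
        code = codes.get(ch, "")  # vowels map to "" and reset prev
        if code and code != prev:
            digits.append(code)
        prev = code
    return (first + "".join(digits) + "000")[:4]
```

For example, "Moussaoui" and the variant spelling "Musawi" both encode to M200, so a Soundex-keyed index would retrieve the record despite the misspelling.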
International Journal on Artificial Intelligence Tools | 2006
Olcay Kursun; Anna Koufakou; Abhijit Wakchaure; Michael Georgiopoulos; Kenneth Reynolds; Ronald Eaglin
The obvious need to use modern computer networking capabilities to enable the effective sharing of information has resulted in data-sharing systems, which store and manage large amounts of data. These data need to be effectively searched and analyzed. More specifically, in the presence of dirty data, a search for specific information by a standard query (e.g., a search for a name that is misspelled or mistyped) does not return all needed information, as required in homeland security, criminology, and medical applications, amongst others. Different techniques, such as Soundex, Phonix, n-grams, and edit distance, have been used to improve the matching rate in these name-matching applications. These techniques have demonstrated varying levels of success, but there is a pressing need for name-matching approaches that provide high levels of accuracy while maintaining low computational complexity. In this paper, such a technique, called ANSWER, is proposed and its characteristics are discussed. Our results demonstrate that ANSWER possesses high accuracy as well as high speed, and is superior to other techniques for retrieving fuzzy name matches in large databases.
Archive | 2013
Olga Semukhina; Kenneth Reynolds