Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laurence Anthony F. Park is active.

Publications


Featured research published by Laurence Anthony F. Park.


Pattern Recognition | 2013

An effective retinal blood vessel segmentation method using multi-scale line detection

Uyen T. V. Nguyen; Alauddin Bhuiyan; Laurence Anthony F. Park; Kotagiri Ramamohanarao

Changes in retinal blood vessel features are precursors of serious diseases such as cardiovascular disease and stroke. Therefore, analysis of retinal vascular features can assist in detecting these changes and allow the patient to take action while the disease is still in its early stages. Automation of this process would help to reduce the cost associated with trained graders and remove the issue of inconsistency introduced by manual grading. Among different retinal analysis tasks, retinal blood vessel extraction plays an extremely important role as it is the first essential step before any measurement can be made. In this paper, we present an effective method for automatically extracting blood vessels from colour retinal images. The proposed method is based on the fact that by changing the length of a basic line detector, line detectors at varying scales are achieved. To maintain the strengths and eliminate the drawbacks of each individual line detector, the line responses at varying scales are linearly combined to produce the final segmentation for each retinal image. The performance of the proposed method was evaluated both quantitatively and qualitatively on three publicly available datasets: DRIVE, STARE, and REVIEW. On the DRIVE and STARE datasets, the proposed method achieves high local accuracy (a measure assessing accuracy in regions around the vessels) while retaining overall accuracy comparable to other existing methods. Visual inspection of the segmentation results shows that the proposed method produces accurate segmentation of central reflex vessels while keeping close vessels well separated. On the REVIEW dataset, the vessel width measurements obtained using the segmentations produced by the proposed method are highly accurate and close to the measurements provided by the experts. This demonstrates the high segmentation accuracy of the proposed method and its applicability to automatic vascular calibre measurement. Other advantages of the proposed method include its efficiency, with fast segmentation times, and its simplicity and scalability in dealing with high-resolution retinal images.
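The core idea of the abstract above, a basic line detector whose length is varied to obtain detectors at multiple scales, with the responses linearly combined, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the orientations are reduced to four, the window size and scale weights are assumptions, and the green-channel preprocessing of the paper is omitted.

```python
# Minimal sketch of multi-scale line detection (illustrative assumptions:
# 4 orientations, 5x5 local window, equal-weight linear combination).

def line_response(img, y, x, length, window=5):
    """Line response at (y, x): best average intensity along straight
    lines of `length` pixels through the pixel, minus the local
    window mean. `img` is a list of lists of floats."""
    h, w = len(img), len(img[0])
    half = length // 2
    # Four coarse orientations: horizontal, vertical, two diagonals.
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    best = float("-inf")
    for dy, dx in dirs:
        vals = []
        for t in range(-half, half + 1):
            yy, xx = y + t * dy, x + t * dx
            if 0 <= yy < h and 0 <= xx < w:
                vals.append(img[yy][xx])
        if vals:
            best = max(best, sum(vals) / len(vals))
    wh = window // 2
    win = [img[yy][xx]
           for yy in range(max(0, y - wh), min(h, y + wh + 1))
           for xx in range(max(0, x - wh), min(w, x + wh + 1))]
    return best - sum(win) / len(win)

def multiscale_response(img, y, x, lengths=(3, 5, 7)):
    """Equal-weight linear combination of line responses across scales."""
    rs = [line_response(img, y, x, L) for L in lengths]
    return sum(rs) / len(rs)
```

On a toy image containing a one-pixel-wide bright horizontal line, the combined response is high on the line and low on the background, which is the behaviour the segmentation thresholds.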


Pattern Recognition | 2011

Clustering ellipses for anomaly detection

Masud Moshtaghi; Timothy C. Havens; James C. Bezdek; Laurence Anthony F. Park; Christopher Leckie; Sutharshan Rajasegarar; James M. Keller; Marimuthu Palaniswami

Comparing, clustering and merging ellipsoids are problems that arise in various applications, e.g., anomaly detection in wireless sensor networks and motif-based patterned fabrics. We develop a theory underlying three measures of similarity that can be used to find groups of similar ellipsoids in p-space. Clusters of ellipsoids are suggested by dark blocks along the diagonal of a reordered dissimilarity image (RDI). The RDI is built with the recursive iVAT algorithm using any of the three (dis)similarity measures as input and performs two functions: (i) it is used to visually assess and estimate the number of possible clusters in the data; and (ii) it offers a means for comparing the three similarity measures. Finally, we apply the single linkage and CLODD clustering algorithms to three two-dimensional data sets using each of the three dissimilarity matrices as input. Two data sets are synthetic, and the third is a set of real WSN data that has one known second-order node anomaly. We conclude that focal distance is the best measure of elliptical similarity, iVAT images are a reliable basis for estimating cluster structures in sets of ellipsoids, and single linkage can successfully extract the indicated clusters.
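The pipeline described above (pairwise ellipsoid dissimilarities, then single-linkage extraction) can be sketched in a few lines. Note the dissimilarity used here, centre distance plus the Frobenius distance between the shape matrices, is a simple stand-in for illustration, not the paper's focal-distance measure, and the iVAT reordering step is omitted.

```python
import math

# Sketch: pairwise dissimilarity over 2-D ellipses, then single linkage.
# An ellipse is (center, A) with A a 2x2 shape matrix as nested tuples.

def ellipse_dissim(e1, e2):
    """Stand-in dissimilarity: center distance + Frobenius shape distance."""
    (c1, A1), (c2, A2) = e1, e2
    center = math.dist(c1, c2)
    shape = math.sqrt(sum((A1[i][j] - A2[i][j]) ** 2
                          for i in range(2) for j in range(2)))
    return center + shape

def single_linkage(items, dissim, threshold):
    """Merge clusters whose closest pair of members is within `threshold`."""
    clusters = [[i] for i in range(len(items))]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dissim(items[i], items[j])
                        for i in clusters[a] for j in clusters[b])
                if d <= threshold:
                    clusters[a] += clusters[b]
                    del clusters[b]
                    merged = True
                    break
            if merged:
                break
    return clusters
```

With two nearby unit-shape ellipses and one distant ellipse, single linkage recovers the expected two-cluster structure.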


Information Retrieval | 2010

Click-based evidence for decaying weight distributions in search effectiveness metrics

Yuye Zhang; Laurence Anthony F. Park; Alistair Moffat

Search effectiveness metrics are used to evaluate the quality of the answer lists returned by search services, usually based on a set of relevance judgments. One plausible way of calculating an effectiveness score for a system run is to compute the inner product of the run’s relevance vector and a “utility” vector, where the ith element in the utility vector represents the relative benefit obtained by the user of the system if they encounter a relevant document at depth i in the ranking. This paper uses such a framework to examine the user behavior patterns, and hence utility weightings, that can be inferred from a web query log. We describe a process for extrapolating user observations from query log clickthroughs, and employ this user model to measure the quality of effectiveness weighting distributions. Our results show that for measures with static distributions (that is, utility weighting schemes for which the weight vector is independent of the relevance vector), the geometric weighting model employed in the rank-biased precision effectiveness metric offers the closest fit to the user observation model. In addition, using past TREC data to indicate the likelihood of relevance, we also show that the distributions employed in the BPref and MRR metrics are the best fit among the measures for which static distributions do not exist.
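The inner-product framework described above is easy to make concrete. For rank-biased precision (RBP) the utility weights are geometric, w_i = (1 - p) * p^(i-1), where the persistence parameter p models how likely the user is to continue to the next result; the value p = 0.8 below is a common illustrative choice, not a prescription from the paper.

```python
# Effectiveness score as the inner product of a relevance vector and a
# utility (weight) vector; RBP's geometric weights as one static example.

def rbp_weights(depth, p=0.8):
    """Geometric RBP utility weights w_i = (1 - p) * p**(i - 1)."""
    return [(1 - p) * p ** (i - 1) for i in range(1, depth + 1)]

def effectiveness(relevance, weights):
    """Inner product of a (binary) relevance vector and a utility vector."""
    return sum(r * w for r, w in zip(relevance, weights))
```

For example, with p = 0.5 and every document to depth 3 relevant, the score is 0.5 + 0.25 + 0.125 = 0.875; the weights decay with depth, reflecting diminishing benefit from documents lower in the ranking.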


IEEE Transactions on Knowledge and Data Engineering | 2004

Fourier domain scoring: a novel document ranking method

Laurence Anthony F. Park; Kotagiri Ramamohanarao; Marimuthu Palaniswami

Current document retrieval methods use a vector space similarity measure to give scores of relevance to documents when related to a specific query. The central problem with these methods is that they neglect any spatial information within the documents in question. We present a new method, called Fourier Domain Scoring (FDS), which takes advantage of this spatial information, via the Fourier transform, to give a more accurate ordering of relevance to a document set. We show that FDS gives an improvement in precision over the vector space similarity measures for the common case of Web-like queries, and it gives similar results to the vector space measures for longer queries.
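The spatial idea behind FDS can be illustrated with a simplified sketch (this is not the paper's exact scoring formula): each query term is mapped to a positional signal over the document, the signal is taken into the Fourier domain, and terms whose spectra carry energy at the same frequencies, i.e. terms that co-occur in similar spatial patterns, contribute more to the score.

```python
import cmath

# Simplified spectral scoring sketch: positional term signals -> DFT ->
# combine spectrum magnitudes across query terms, frequency by frequency.

def term_signal(doc_terms, term, bins=8):
    """Binary occurrence signal of `term` over `bins` document segments."""
    n = max(len(doc_terms), 1)
    sig = [0.0] * bins
    for pos, t in enumerate(doc_terms):
        if t == term:
            sig[min(pos * bins // n, bins - 1)] = 1.0
    return sig

def dft(signal):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(signal)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(signal)) for k in range(n)]

def spectral_score(doc_terms, query_terms, bins=8):
    """Sum over frequencies of the product of term-spectrum magnitudes."""
    spectra = [dft(term_signal(doc_terms, q, bins)) for q in query_terms]
    score = 0.0
    for k in range(bins):
        prod = 1.0
        for sp in spectra:
            prod *= abs(sp[k])
        score += prod
    return score
```

A document where both query terms appear, interleaved in the same regions, scores higher than one containing only a single query term, which is the kind of spatial evidence a bag-of-words vector space measure discards.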


ACM Transactions on Information Systems | 2005

A novel document retrieval method using the discrete wavelet transform

Laurence Anthony F. Park; Kotagiri Ramamohanarao; Marimuthu Palaniswami

Current information retrieval methods either ignore the term positions or deal with exact term positions; the former can be seen as coarse document resolution, the latter as fine document resolution. We propose a new spectral-based information retrieval method that is able to utilize many different levels of document resolution by examining the term patterns that occur in the documents. To do this, we take advantage of the multiresolution analysis properties of the wavelet transform. We show that we are able to achieve higher precision when compared to vector space and proximity retrieval methods, while producing fast query times and using a compact index.
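The multiresolution property the abstract above relies on can be seen with the Haar wavelet, the simplest wavelet transform (a hedged illustration; the paper's index stores wavelet coefficients of term signals, but its retrieval formulas are not reproduced here). Coarse coefficients capture "the term occurs in this half or quarter of the document", while fine coefficients capture exact local positions, giving the spectrum of document resolutions in between.

```python
# Full Haar decomposition of a term-occurrence signal (length a power
# of two). Output: [overall average, coarse details, ..., fine details].

def haar_transform(signal):
    coeffs = []
    s = list(signal)
    while len(s) > 1:
        averages = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
        details = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
        coeffs = details + coeffs  # prepend so coarser levels come first
        s = averages
    return s + coeffs
```

For the signal [1, 0, 0, 1] (a term at both ends of a four-segment document), the first coefficient 0.5 is the overall occurrence density, the coarse detail 0.0 says both halves contain the term equally, and the fine details locate it within each half.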


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2009

Score adjustment for correction of pooling bias

William Webber; Laurence Anthony F. Park

Information retrieval systems are evaluated against test collections of topics, documents, and assessments of which documents are relevant to which topics. Documents are chosen for relevance assessment by pooling runs from a set of existing systems. New systems can return unassessed documents, leading to an evaluation bias against them. In this paper, we propose to estimate the degree of bias against an unpooled system, and to adjust the system's score accordingly. Bias estimation can be done via leave-one-out experiments on the existing, pooled systems, but this requires the problematic assumption that the new system is similar to the existing ones. Instead, we propose that all systems, new and pooled, be fully assessed against a common set of topics, and that the bias observed against the new system on the common topics be used to adjust its scores on the existing topics. We demonstrate using resampling experiments on TREC test sets that our method leads to a marked reduction in error, even with only a relatively small number of common topics, and that the error decreases as the number of topics increases.
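The adjustment step described above reduces to simple arithmetic, sketched below under stated assumptions: on the common topics the new system is scored twice, once under the pooled judgments and once under full assessment; the mean gap is the observed bias, which is then added to its pooled-only scores. Function and variable names are illustrative, not from the paper.

```python
# Hedged sketch of pooling-bias score adjustment via common topics.

def adjusted_scores(pooled_scores, common_pooled, common_full):
    """pooled_scores: the new system's per-topic scores under pooled
    judgments on the existing topics; common_pooled / common_full: its
    scores on the common topics under pooled vs. full assessment."""
    n = len(common_pooled)
    bias = sum(f - p for f, p in zip(common_full, common_pooled)) / n
    return [s + bias for s in pooled_scores]
```

For example, if full assessment lifts the system's common-topic scores from (0.2, 0.3) to (0.3, 0.5), the observed bias is 0.15, and every existing-topic score is shifted up by that amount.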


Very Large Data Bases | 2009

Efficient storage and retrieval of probabilistic latent semantic information for information retrieval

Laurence Anthony F. Park; Kotagiri Ramamohanarao

Probabilistic latent semantic analysis (PLSA) is a method for computing term and document relationships from a document set. The probabilistic latent semantic index (PLSI) has been used to store PLSA information, but unfortunately the PLSI uses excessive storage space relative to a simple term frequency index, which causes lengthy query times. To overcome the storage and speed problems of PLSI, we introduce the probabilistic latent semantic thesaurus (PLST): an efficient and effective method of storing the PLSA information. We show that through methods such as document thresholding and term pruning, we are able to maintain the high precision results found using PLSA while using a very small fraction (0.15%) of the storage space of PLSI.
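The storage idea can be sketched as follows: rather than keeping the dense PLSA factors, store for each term only its strongest related terms above a weight threshold. The toy P(t|z) factors, the marginalisation used for term-term weights, and the pruning parameters below are all illustrative assumptions, not the paper's construction or settings.

```python
# Sketch: build a pruned term-term thesaurus from toy PLSA factors.

def build_thesaurus(p_t_given_z, p_z, top_k=2, min_weight=0.01):
    """p_t_given_z: {topic: {term: prob}}, p_z: {topic: prob}.
    Returns {term: [(related_term, weight), ...]}, keeping at most
    `top_k` related terms per term, each with weight >= min_weight."""
    terms = {t for dist in p_t_given_z.values() for t in dist}
    thesaurus = {}
    for t in terms:
        weights = {}
        for z, dist in p_t_given_z.items():
            for u, p_u in dist.items():
                if u != t:
                    # joint weight of (u, t) marginalised over topics
                    weights[u] = weights.get(u, 0.0) + p_z[z] * dist.get(t, 0.0) * p_u
        related = sorted(weights.items(), key=lambda kv: -kv[1])[:top_k]
        thesaurus[t] = [(u, w) for u, w in related if w >= min_weight]
    return thesaurus
```

Terms that never share a topic get weight zero and are pruned away, which is what makes the stored thesaurus so much smaller than the dense index.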


Web Intelligence | 2007

Personalized PageRank for Web Page Prediction Based on Access Time-Length and Frequency

Yong Zhen Guo; Kotagiri Ramamohanarao; Laurence Anthony F. Park

Web page prefetching techniques are used to address the access latency problem of the Internet. To perform successful prefetching, we must be able to predict the next set of pages that will be accessed by users. The PageRank algorithm used by Google is able to compute the popularity of a set of Web pages based on their link structure. In this paper, a novel PageRank-like algorithm is proposed for conducting Web page prediction. Two biasing factors are adopted to personalize PageRank, so that it favors the pages that are more important to users. One factor is the length of time spent visiting a page and the other is the frequency with which a page was visited. The experiments conducted show that using these two factors simultaneously to bias PageRank results in more accurate Web page prediction than other methods that use only one of these two factors.
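A minimal power-iteration sketch of a PageRank personalized in the way described above: the teleport (personalization) vector weights each page by time spent times visit frequency, normalised. The damping factor 0.85, the toy link graph, and the simple product combination of the two factors are assumptions for illustration, not the paper's exact formulation.

```python
# Power iteration for a biased PageRank; teleport mass is distributed
# according to (time spent on page) * (visit frequency), normalised.

def personalized_pagerank(links, time_spent, freq, d=0.85, iters=50):
    """links: {page: [outlinked pages]} (every page has >= 1 outlink)."""
    pages = list(links)
    bias = {p: time_spent[p] * freq[p] for p in pages}
    total = sum(bias.values())
    bias = {p: b / total for p, b in bias.items()}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - d) * bias[p] + d * incoming
        rank = new
    return rank
```

On a toy three-page graph, a page the user dwells on long and visits often ends up with the highest rank, making it the page to prefetch.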


Knowledge Discovery and Data Mining | 2007

Query expansion using a collection dependent probabilistic latent semantic thesaurus

Laurence Anthony F. Park; Kotagiri Ramamohanarao

Many queries on collections of text documents are too short to produce informative results. Automatic query expansion is a method of adding terms to the query, without interaction from the user, in order to obtain more refined results. In this investigation, we examine our novel automatic query expansion method using the probabilistic latent semantic thesaurus, which is based on probabilistic latent semantic analysis. We show how to construct the thesaurus by mining text documents for probabilistic term relationships, and we show that by using the latent semantic thesaurus, we can overcome many of the previously identified problems associated with latent semantic analysis on large document sets. Experiments using TREC document sets show that our term expansion method outperforms the popular probabilistic pseudo-relevance feedback method by 7.3%.
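The expansion step itself can be sketched in a few lines: each query term pulls in its strongest related terms from the thesaurus, weighted down by relationship strength. The toy thesaurus, the per-term cap, and the down-weighting factor are illustrative assumptions standing in for the collection-dependent probabilistic latent semantic thesaurus.

```python
# Sketch of automatic query expansion against a term thesaurus.

def expand_query(query, thesaurus, per_term=2, weight_scale=0.5):
    """Return {term: weight}: original terms at weight 1.0, related
    terms at their thesaurus weight scaled by `weight_scale`."""
    expanded = {t: 1.0 for t in query}
    for t in query:
        for related, w in thesaurus.get(t, [])[:per_term]:
            if related not in expanded:
                expanded[related] = weight_scale * w
    return expanded
```

A short query such as "car" becomes a weighted query over "car", "automobile", "vehicle", giving the retrieval engine more evidence to match against without any user interaction.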


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

A novel document ranking method using the discrete cosine transform

Laurence Anthony F. Park; Marimuthu Palaniswami; Kotagiri Ramamohanarao

We propose a new spectral text retrieval method using the discrete cosine transform (DCT). By taking advantage of the properties of the DCT and by employing the fast query and compression techniques found in vector space methods (VSM), we show that we can process queries as fast as VSM and achieve a much higher precision.
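For reference, the transform this method applies to positional term signals, the DCT-II, is shown below in pure Python (the scoring formulas themselves are not reproduced here; only the transform is).

```python
import math

# Naive DCT-II (unnormalised): X[k] = sum_i x[i] * cos(pi*k*(2i+1)/(2n)).

def dct2(signal):
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal)) for k in range(n)]
```

One property that makes the DCT attractive here is energy compaction: a smooth or constant term signal concentrates all of its energy in the first few coefficients, so the index can stay compact.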

Collaboration


Dive into Laurence Anthony F. Park's collaborations.

Top Co-Authors

Glenn Stone

University of Western Sydney


Simeon J. Simoff

University of Western Sydney
