Publications


Featured research published by Kevin Grant.


Non-Photorealistic Animation and Rendering | 2008

Stylized black and white images from photographs

David Mould; Kevin Grant

Halftoning algorithms attempt to match the tone of an input image despite lower color resolution in the output. However, in some artistic media and styles, tone matching is not the goal at all; rather, details are either portrayed sharply or omitted entirely. In this paper, we present an algorithm for abstracting arbitrary input images into black and white images. Our goal is to preserve details while producing, as much as possible, large regions of solid color in the output. We present two methods based on energy minimization, using loopy belief propagation and graph cuts, but it is difficult to devise a single energy term that both sufficiently promotes coherence and adequately preserves details. We next propose a third algorithm separating these two concerns. Our third algorithm composes a base layer, consisting of large flat-colored regions, with a detail layer, containing the small high-contrast details. The base layer is computed with energy minimization, while local adaptive thresholding gives the detail layer. The final labeling is tidied by removing small components, vectorizing, and smoothing the region boundaries. The output images satisfy our goal of high spatial coherence with detail preservation.
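The detail-layer step described in the abstract can be sketched as a local adaptive threshold followed by composition onto a base layer. This is an illustrative reconstruction only, not the paper's implementation; in particular, the energy-minimization base layer and the vectorizing/smoothing cleanup are omitted:

```python
import numpy as np

def adaptive_threshold(img, radius=1, bias=0.0):
    """Label each pixel black/white against the mean of its local window.
    A naive sketch of a detail layer: small high-contrast features survive,
    while flat regions threshold to a uniform label."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y, x] > img[y0:y1, x0:x1].mean() + bias
    return out

def compose(base, detail, contrast_mask):
    """Overlay detail-layer pixels onto the base layer wherever the
    (precomputed) high-contrast mask is set."""
    return np.where(contrast_mask, detail, base)
```

For example, a single bright pixel on a dark background is kept by the detail layer even though a global tone match would discard it.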


Australasian Joint Conference on Artificial Intelligence | 2005

Conditioning graphs: practical structures for inference in Bayesian networks

Kevin Grant; Michael C. Horsch

Programmers employing inference in Bayesian networks typically rely on including both the model and an inference engine in their application. Sophisticated inference engines require non-trivial amounts of space and are also difficult to implement, which limits their use in applications that would otherwise benefit from probabilistic inference. This paper presents a system that minimizes the space requirement of the model. The inference engine is simple enough to avoid space limitations and to be easily implemented in almost any environment. We show a fast, compact indexing structure that is linear in the size of the network. The additional space required to compute over the model is linear in the number of variables in the network.
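As a rough illustration of the kind of minimal, easily embedded inference engine the paper argues for, the sketch below stores a hypothetical three-variable network as plain CPT tables and computes marginals by brute-force enumeration. The network, names, and enumeration strategy are illustrative assumptions; the conditioning-graph structure itself is not reproduced here:

```python
from functools import reduce

# A tiny Bayesian network as plain tables: each variable maps to
# (parent list, CPT from parent-value tuples to P(var = True)).
# Structure and numbers are made up for illustration.
network = {
    "rain":      ([], {(): 0.2}),
    "sprinkler": ([], {(): 0.1}),
    "wet":       (["rain", "sprinkler"],
                  {(True, True): 0.99, (True, False): 0.9,
                   (False, True): 0.8, (False, False): 0.0}),
}

def prob(var, value, assignment):
    """CPT lookup for one variable given a full assignment."""
    parents, cpt = network[var]
    p = cpt[tuple(assignment[parent] for parent in parents)]
    return p if value else 1.0 - p

def joint(assignment):
    """Product of all CPT entries: the chain-rule factorization."""
    return reduce(lambda acc, v: acc * prob(v, assignment[v], assignment),
                  network, 1.0)

def marginal(var, value):
    """Sum the joint over all assignments consistent with var = value.
    Deliberately simple (exponential) inference, in the spirit of a
    tiny engine rather than a sophisticated one."""
    names = list(network)
    total = 0.0
    for bits in range(2 ** len(names)):
        assignment = {n: bool(bits >> i & 1) for i, n in enumerate(names)}
        if assignment[var] == value:
            total += joint(assignment)
    return total
```

The engine fits in a few dozen lines precisely because all sophistication has been traded away; conditioning graphs aim for similar implementation simplicity without the exponential enumeration.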


International Journal of Approximate Reasoning | 2009

Methods for constructing balanced elimination trees and other recursive decompositions

Kevin Grant; Michael C. Horsch

An elimination tree is a form of recursive factorization for Bayesian networks. Elimination trees can be used as the basis for a practical implementation of Bayesian network inference via conditioning graphs. The time complexity for inference in elimination trees has been shown to be O(n exp(d)), where d is the height of the elimination tree. In this paper, we demonstrate two new heuristics for building small elimination trees. We also demonstrate a simple technique for deriving elimination trees from Darwiche et al.'s dtrees, and vice versa. We show empirically that our heuristics, combined with a constructive process for building elimination trees, produce smaller elimination trees than previous methods.
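Since the stated complexity is O(n exp(d)), shaving height off the tree reduces inference cost exponentially, which is why balancing matters. A minimal sketch, using hypothetical trees over the same seven labels, showing how height differs between a balanced and a chain-shaped decomposition:

```python
def height(tree):
    """Height of a nested (label, children) tuple tree, counting nodes."""
    _, children = tree
    return 1 + max((height(child) for child in children), default=0)

# Two decompositions over the same seven labels (illustrative only):
balanced = ("d", [("b", [("a", []), ("c", [])]),
                  ("f", [("e", []), ("g", [])])])
chain = ("a", [("b", [("c", [("d", [("e", [("f", [("g", [])])])])])])])
```

Here the balanced tree has height 3 versus 7 for the chain, so an O(n exp(d)) bound favors the balanced form by an exponential factor in the height difference.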


Conference on Future Play | 2008

Combining heuristic and landmark search for path planning

Kevin Grant; David Mould

We propose a hybridization of heuristic search and the LPI algorithm. Our approach uses heuristic search to find paths to landmarks, and employs a small amount of landmark information to correct itself when the heuristic search deviates from the shortest path. The use of the heuristic allows lower memory usage than LPI, while the use of the landmarks permits the algorithm to operate effectively even with a poor heuristic. When the heuristic accuracy is very high, the algorithm tends towards greedy search; when the heuristic accuracy is low, the algorithm tends towards LPI. Experiments show that the memory usage of LPI can be reduced by more than half while preserving the accuracy of the solutions.
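The landmark lower bound that lets such a search recover from a weak heuristic is commonly the triangle-inequality bound over precomputed landmark distances. The sketch below uses that classic (ALT-style) bound inside an A*-style search; this is an assumption for illustration, since the LPI machinery in the paper differs:

```python
import heapq

def dijkstra(graph, source):
    """Exact shortest distances from one node; used offline to build
    a landmark's distance table."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def landmark_heuristic(tables, node, goal):
    """Triangle-inequality lower bound |d(L, goal) - d(L, node)|,
    maximized over landmark tables. Admissible by construction."""
    return max((abs(t[goal] - t[node]) for t in tables), default=0)

def astar(graph, start, goal, tables):
    """Heuristic search guided by the landmark bound above."""
    dist = {start: 0}
    heap = [(landmark_heuristic(tables, start, goal), start)]
    while heap:
        _, u = heapq.heappop(heap)
        if u == goal:
            return dist[u]
        for v, w in graph[u]:
            if dist[u] + w < dist.get(v, float("inf")):
                dist[v] = dist[u] + w
                heapq.heappush(
                    heap, (dist[v] + landmark_heuristic(tables, v, goal), v))
    return None
```

With few landmarks the bound is loose and the search behaves more like plain heuristic search; with many landmarks the memory cost grows, which is the trade-off the abstract describes.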


Canadian Conference on Artificial Intelligence | 2006

Exploiting dynamic independence in a static conditioning graph

Kevin Grant; Michael C. Horsch

A conditioning graph (CG) is a graphical structure that attempts to minimize the implementation overhead of computing probabilities in belief networks. A conditioning graph recursively factorizes the network, but restricting each decomposition to a single node allows us to store the structure with minimal overhead and to compute with a simple algorithm. This paper extends conditioning graphs with optimizations that effectively reduce the height of the CG, thus reducing time complexity exponentially, while increasing the storage requirements by only a constant factor. We conclude that CGs are frequently as efficient as any other exact inference method, with the advantage of being vastly superior to variable elimination (VE) and junction trees (JT) in terms of space complexity, and far simpler to implement.


Journal of Automated Reasoning | 2010

Preface: Special Issue on Uncertain Reasoning

Yang Xiang; Kevin Grant

This special issue contains selected and extended papers from the Uncertain Reasoning Special Track at the 2008 International Florida Artificial Intelligence Research Society Conference (FLAIRS). The Uncertain Reasoning Special Track is the oldest track in FLAIRS conferences, running annually since 1996. Over the years, the track has been established as a key event where researchers on broad issues related to reasoning under uncertainty, following a variety of alternative paradigms, formalisms and methodologies, come together to exchange insights and to influence each other's work. The 2008 track also featured a special session dedicated to Henry E. Kyburg Jr., a renowned, respected professor of computer science and philosophy, and a founding member and long-time contributor to the track, who passed away October 30, 2007. During a 43-year academic career, he made significant contributions to the field of uncertain reasoning. He published numerous articles and books on topics such as inductive logic, statistical reasoning, probability, and epistemology. Henry loved to see a thousand flowers bloom. He was interested in belief measures, uncertain inference, nonmonotonicity and combining logic with probability. Papers included in this special issue are assembled in this very spirit.

In this special issue, Palacios-Alonso, Brizuela, and Sucar study dynamic naive Bayesian classifiers. They propose an evolutionary optimization algorithm for designing such classifiers. The design methodology is applied to hand gesture recognition. Experimental results show that the evolved network has higher average classification accuracy than the basic dynamic naive Bayesian classifier and a hidden Markov model.


Canadian Conference on Artificial Intelligence | 2012

Exploiting the probability of observation for efficient Bayesian network inference

Fouzia Mousumi; Kevin Grant

It is well-known that the observation of a variable in a Bayesian network can affect the effective connectivity of the network, which in turn affects the efficiency of inference. Unfortunately, the observed variables may not be known until runtime, which limits the amount of compile-time optimization that can be done in this regard. In this paper, we consider how to improve inference when we know the likelihood of a variable being observed. We show how these probabilities of observation can be exploited to improve existing heuristics for choosing elimination orderings for inference. Empirical tests over a set of benchmark networks using the Variable Elimination algorithm show reductions of up to 50%, 70%, and 55% in multiplications, summations, and runtime, respectively.
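One way to read "exploiting the probability of observation" is to discount likely-observed neighbours when scoring elimination candidates, since an observed variable does not enlarge the factor produced by elimination. The greedy min-degree variant below uses that weighting; it is an illustrative assumption, not the paper's exact heuristic:

```python
def elimination_order(adjacency, p_obs):
    """Greedy elimination ordering over a moral graph. Each candidate is
    scored by its expected degree, where a neighbour counts
    (1 - its observation probability). p_obs maps variable -> probability
    of being observed at runtime (assumed inputs, for illustration)."""
    adj = {v: set(ns) for v, ns in adjacency.items()}
    order = []
    while adj:
        v = min(adj, key=lambda x: sum(1 - p_obs.get(n, 0.0) for n in adj[x]))
        # Standard fill-in: connect v's neighbours, then remove v.
        for n in adj[v]:
            adj[n] |= adj[v] - {n, v}
            adj[n].discard(v)
        order.append(v)
        del adj[v]
    return order
```

On a chain a-b-c where b is observed with probability 0.9, the endpoints become the cheapest candidates, so b is deferred to the end of the ordering.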


International Journal of Approximate Reasoning | 2012

Efficient indexing methods for recursive decompositions of Bayesian networks

Kevin Grant

We consider efficient indexing methods for conditioning graphs, which are a form of recursive decomposition for Bayesian networks. We compare two well-known methods for indexing, a top-down method and a bottom-up method, and discuss the redundancy that each of these suffers from. We present a new method for indexing that combines the advantages of each model in order to reduce this redundancy. We also introduce the concept of an update manager, which is a node in the conditioning graph that controls when other nodes update their current index. Empirical evaluations over a suite of standard test networks show a considerable reduction both in the amount of indexing computation that takes place, and in the overall runtime required by the query algorithm.


Mexican International Conference on Artificial Intelligence | 2010

On the structure of elimination trees for Bayesian network inference

Kevin Grant; Keilan Scholten

We present an optimization to elimination tree inference in Bayesian networks through the use of unlabeled nodes, or nodes that are not labeled with a variable from the Bayesian network. Through the use of these unlabeled nodes, we are able to restructure these trees, and reduce the amount of computation performed during the inference process. Empirical tests show that the algorithm can reduce multiplications by up to 70%, and overall runtime by up to 50%.


Visualization and Data Analysis | 2008

Extending the dimensionality of Flatland with attribute view probabilistic models

Eric Neufeld; Mikelis G. Bickis; Kevin Grant

In much of Bertin's Semiology of Graphics, marks representing individuals are arranged on paper according to their various attributes (components). Paper and computer monitors can conveniently map two attributes to width and height, and can map other attributes into nonspatial dimensions such as texture or colour. Good visualizations exploit the human perceptual apparatus so that key relationships are quickly detected as interesting patterns. Graphical models take a somewhat dual approach with respect to the original information. Components, rather than individuals, are represented as marks. Links between marks represent conceptually simple, easily computable, and typically probabilistic relationships of possibly varying strength, and the viewer studies the diagram to discover deeper relationships. Although visually annotated graphical models have been around for almost a century, they have not been widely used. We argue that they have the potential to represent multivariate data as generically as pie charts represent univariate data. The present work suggests a semiology for graphical models, and discusses the consequences for information visualization.

Collaboration


Dive into Kevin Grant's collaborations.

Top Co-Authors

Michael C. Horsch
University of Saskatchewan

Eric Neufeld
University of Saskatchewan

Fouzia Mousumi
University of Lethbridge

Mikelis G. Bickis
University of Saskatchewan