
Publications


Featured research published by Joe Michael Kniss.


Graphics Interface | 2007

Packet-based Whitted and distribution ray tracing

Solomon Boulos; David Edwards; J. Dylan Lacewell; Joe Michael Kniss; Jan Kautz; Peter Shirley; Ingo Wald

Much progress has been made toward interactive ray tracing, but most research has focused specifically on ray casting. A common approach is to use packets of rays to amortize cost across sets of rays. Whether packets can be used to speed up the cost of reflection and refraction rays is unclear. The issue is complicated since such rays do not share common origins and often have less directional coherence than viewing and shadow rays. Since the primary advantage of ray tracing over rasterization is the computation of global effects, such as accurate reflection and refraction, this lack of knowledge should be corrected. We are also interested in exploring whether distribution ray tracing, due to its stochastic properties, further erodes the effectiveness of techniques used to accelerate ray casting. This paper addresses the question of whether packet-based ray tracing algorithms can be effectively used for more than visibility computation. We show that by choosing an appropriate data structure and a suitable packet assembly algorithm we can extend the idea of packets from ray casting to Whitted-style and distribution ray tracing, while maintaining efficiency.
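The packet idea is easiest to see in miniature. The sketch below is a hypothetical illustration, not the authors' implementation: it groups secondary rays by direction-sign octant (a common coherence heuristic, since rays sharing an octant visit acceleration-structure nodes in the same order) and chops each group into fixed-size packets. Names such as Ray and assemble_packets are invented for this example.

```python
# Hypothetical sketch of packet assembly for secondary rays.
from dataclasses import dataclass
from collections import defaultdict
from typing import List, Tuple

@dataclass
class Ray:
    origin: Tuple[float, float, float]
    direction: Tuple[float, float, float]

def direction_octant(ray: Ray) -> Tuple[bool, ...]:
    # Rays that share a direction-sign octant traverse a BVH or
    # kd-tree in the same near-to-far order, which is what makes
    # tracing them together as a packet cheap.
    return tuple(d >= 0.0 for d in ray.direction)

def assemble_packets(rays: List[Ray], packet_size: int = 4) -> List[List[Ray]]:
    # Bucket rays by octant, then chop each bucket into fixed-size
    # packets; leftover rays form partial packets.
    buckets = defaultdict(list)
    for r in rays:
        buckets[direction_octant(r)].append(r)
    packets = []
    for bucket in buckets.values():
        for i in range(0, len(bucket), packet_size):
            packets.append(bucket[i:i + packet_size])
    return packets

if __name__ == "__main__":
    rays = [Ray((0, 0, 0), (1, 1, -1)), Ray((1, 0, 0), (1, 1, -1)),
            Ray((0, 1, 0), (-1, 0, 1)), Ray((2, 0, 1), (1, 0.5, -1))]
    for p in assemble_packets(rays):
        print(len(p), [r.direction for r in p])
```

Reflection and refraction rays scatter across octants far more than camera rays do, which is precisely the coherence loss the paper investigates.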


IEEE VGTC Conference on Visualization | 2010

Visualizing summary statistics and uncertainty

Kristin Potter; Joe Michael Kniss; Richard F. Riesenfeld; Christopher R. Johnson

The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data, including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two-dimensional distributions. Finally, a use case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution.
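As a rough illustration of the descriptive statistics such a hybrid summary plot might encode, here is a minimal NumPy sketch. The function name and the exact statistic set are assumptions; the paper's plot combines more features than this.

```python
# A minimal sketch, assuming nothing beyond NumPy, of statistics a
# hybrid summary plot could overlay on a box plot.
import numpy as np

def summary_stats(x: np.ndarray) -> dict:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    mean, std = x.mean(), x.std(ddof=1)
    # Moment-based skewness and excess kurtosis, which a glyph can
    # encode as asymmetry and tail weight around the box.
    z = (x - mean) / std
    return {
        "mean": mean, "median": med, "q1": q1, "q3": q3,
        "std": std,
        "skewness": np.mean(z ** 3),
        "excess_kurtosis": np.mean(z ** 4) - 3.0,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.lognormal(mean=0.0, sigma=0.5, size=1000)
    for k, v in summary_stats(sample).items():
        print(f"{k:>15s}: {v: .3f}")
```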


Computational Science and Engineering | 2009

Satisficing the Masses: Applying Game Theory to Large-Scale, Democratic Decision Problems

Kshanti A. Greene; Joe Michael Kniss; George F. Luger; Carl R. Stern

We present ongoing research on large-scale decision models in which there are many invested individuals. We apply our unique Bayesian belief aggregation approach to decision problems, taking into consideration the beliefs and utilities of each individual. Instead of averaging all beliefs to form a single consensus, our aggregation approach allows divergence in beliefs and utilities to emerge. In decision models this divergence has implications for game theory, enabling the competitive aspects of an apparently cooperative situation to emerge. Current approaches to belief aggregation assume cooperative situations by forming one consensus from diverse beliefs. However, many decision problems involve individuals and groups with opposing goals; this forced consensus therefore does not accurately represent the decision problem. By applying our approach to the topical issue of stem cell research, using input from many diverse individuals, we analyze the behavior of a decision model, including the groups of agreement that emerge. We show how to find the Pareto optimal solutions, which represent the decisions in which no group can do better without another group doing worse. We analyze a range of solutions, from attempting to please everybody with the solution that minimizes all emerging groups' losses, to optimizing the outcome for a subset of individuals. Our approach has the long-reaching potential to help define policy and analyze the effect of policy change on individuals.
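The Pareto criterion stated above (no group can do better without another doing worse) is simple to make concrete. The sketch below is an illustrative stand-in, not the authors' model: given hypothetical per-group utilities for each decision option, it returns the non-dominated options.

```python
# Illustrative sketch of extracting Pareto optimal decision options
# from per-group expected utilities. An option is Pareto optimal if
# no other option is at least as good for every group and strictly
# better for at least one group.
from typing import Dict, List

def pareto_optimal(utilities: Dict[str, List[float]]) -> List[str]:
    options = list(utilities)

    def dominates(a: str, b: str) -> bool:
        ua, ub = utilities[a], utilities[b]
        return (all(x >= y for x, y in zip(ua, ub))
                and any(x > y for x, y in zip(ua, ub)))

    return [o for o in options
            if not any(dominates(other, o) for other in options if other != o)]

if __name__ == "__main__":
    # Hypothetical options; columns are utilities for two emergent groups.
    utilities = {
        "fund_fully":  [0.9, 0.1],
        "fund_partly": [0.6, 0.5],
        "no_funding":  [0.1, 0.9],
        "defer":       [0.5, 0.4],  # dominated by fund_partly
    }
    print(pareto_optimal(utilities))  # ['fund_fully', 'fund_partly', 'no_funding']
```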


International Conference on Social Computing | 2010

Representing Diversity in Communities of Bayesian Decision-makers

Kshanti A. Greene; Joe Michael Kniss; George F. Luger

High-quality information has emerged from the contributions of many using the wiki paradigm. A logical next step is to use the wisdom-of-the-crowd philosophy to solve complex problems and produce informed policy. We introduce a new approach to aggregating the beliefs and preferences of many individuals to form models that can be used in social policy and decision-making. Traditional social choice functions used to aggregate beliefs and preferences attempt to find a single solution for the whole population, but may produce an irrational social choice when a stalemate between opposing objectives occurs. Our approach, called collective belief aggregation, partitions a population into collectives that share a preference order over the expected utilities of decision options or the posterior likelihoods of a probabilistic variable. It can be shown that if a group of individuals share a preference order over the options, their aggregate will uphold principles of rational aggregation defined by social choice theorists. A super-agent can then be formed for each collective that accurately represents the preferences of its members. These super-agents can be used to represent the collectives in decision analysis and decision-making tasks. We demonstrate the potential of using collective belief aggregation to incorporate the objectives of stakeholders in policy-making, using preferences elicited from people about healthcare policy.
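The partitioning step described above can be sketched directly: individuals whose expected utilities induce the same ranking of options fall into the same collective. Everything below (names, toy healthcare options) is illustrative; the paper's construction of super-agents from each collective is more involved.

```python
# Hedged sketch of partitioning a population into collectives that
# share a preference order over decision options.
from collections import defaultdict
from typing import Dict, List, Tuple

def preference_order(expected_utility: Dict[str, float]) -> Tuple[str, ...]:
    # Rank options from most to least preferred.
    return tuple(sorted(expected_utility, key=expected_utility.get, reverse=True))

def form_collectives(population: List[Dict[str, float]]) -> Dict[Tuple[str, ...], List[int]]:
    collectives = defaultdict(list)
    for i, eu in enumerate(population):
        collectives[preference_order(eu)].append(i)
    return dict(collectives)

if __name__ == "__main__":
    # Hypothetical expected utilities over three healthcare options.
    population = [
        {"public_option": 0.8, "subsidies": 0.5, "status_quo": 0.1},
        {"public_option": 0.7, "subsidies": 0.6, "status_quo": 0.2},
        {"public_option": 0.2, "subsidies": 0.4, "status_quo": 0.9},
    ]
    for order, members in form_collectives(population).items():
        print(" > ".join(order), "->", members)
```

Because every member of a collective shares the same ranking, aggregating within the collective cannot produce the cyclic, irrational preferences that plague whole-population aggregation.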


Computer Graphics Forum | 2009

Resolution Independent NPR-Style 3D Line Textures

Kristin Potter; Amy Ashurst Gooch; Bruce Gooch; Peter Willemsen; Joe Michael Kniss; Richard F. Riesenfeld; Peter Shirley

This work introduces a technique for interactive walk-throughs of non-photorealistically rendered (NPR) scenes that uses three-dimensional (3D) line primitives to define architectural features of the model as well as to indicate textural qualities. Line primitives are not typically used in this manner; texture mapping techniques are usually favoured because they can encapsulate a great deal of information in a single texture map and take advantage of GPU optimizations for accelerated rendering. However, texture mapped images may not maintain the visual quality or aesthetic appeal that is possible when using 3D lines to simulate NPR scenes such as hand-drawn illustrations or architectural renderings. In addition, line textures can be modified interactively, for instance changing the sketchy quality of the lines. The technique introduced here extracts feature edges from a model and, using these edges, generates a reduced set of line textures that indicate material properties while maintaining interactive frame rates. A clipping algorithm is presented to enable 3D lines to reside only in the interior of the 3D model without exposing the underlying triangulated mesh. The resulting system produces interactive illustrations with high visual quality that are free from animation artifacts.
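One ingredient the abstract mentions, feature edge extraction, can be sketched with a standard dihedral-angle test. This is a generic technique with an assumed threshold and data layout, not the paper's specific pipeline.

```python
# Generic sketch: mark mesh edges as features when their two faces
# bend sharply, or when the edge lies on the mesh boundary.
import numpy as np

def feature_edges(vertices: np.ndarray, faces: np.ndarray, angle_deg: float = 40.0):
    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)
    # Per-face unit normals.
    v = vertices
    n = np.cross(v[faces[:, 1]] - v[faces[:, 0]], v[faces[:, 2]] - v[faces[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    cos_thresh = np.cos(np.radians(angle_deg))
    edges = []
    for edge, fs in edge_faces.items():
        # Boundary edges, or edges whose faces' normals disagree
        # beyond the angle threshold, are feature edges.
        if len(fs) == 1 or np.dot(n[fs[0]], n[fs[1]]) < cos_thresh:
            edges.append(tuple(int(i) for i in edge))
    return edges

if __name__ == "__main__":
    # Two triangles folded 90 degrees along the shared edge (1, 2),
    # so that edge (and all boundary edges) should be reported.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]], float)
    tris = np.array([[0, 1, 2], [1, 3, 2]])
    print(feature_edges(verts, tris))
```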


IEEE Transactions on Visualization and Computer Graphics | 2011

Supervised Manifold Distance Segmentation

Joe Michael Kniss; Guanyu Wang

We present a simple and robust method for image and volume data segmentation based on manifold distance metrics. This is done by treating the image as a function that maps the 2D (image) or 3D (volume) domain to a 2D or 3D manifold in a higher-dimensional feature space. We explore a range of possible feature spaces, including value, gradient, and probabilistic measures, and examine the consequences of including these measures in the feature space. The time and space computational complexity of our segmentation algorithm is O(N), which allows interactive, user-centric segmentation even for large data sets. We show that this method, given an appropriate choice of feature vector, produces results both qualitatively and quantitatively similar to Level Sets, Random Walkers, and others. We validate the robustness of this segmentation scheme with comparisons to standard ground-truth models and a sensitivity analysis of the algorithm.
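A minimal sketch of the idea, under stated assumptions: lift each pixel into a small feature space (value plus gradient magnitude here) and grow a segment from a user seed by accumulating feature-space distances between neighbors. This simplified relaxation pass is a stand-in; the paper's O(N) algorithm and feature-space choices are more careful.

```python
# Hedged sketch of manifold-distance segmentation on a 2D image.
import numpy as np
from collections import deque

def segment(image: np.ndarray, seed: tuple, max_dist: float = 0.75) -> np.ndarray:
    gy, gx = np.gradient(image.astype(float))
    feats = np.dstack([image, np.hypot(gx, gy)])  # (H, W, 2) feature map
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    queue = deque([seed])
    # Grow outward from the seed, relaxing accumulated feature-space
    # distances (a simplified Dijkstra-style pass, not strictly O(N)).
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                step = np.linalg.norm(feats[ny, nx] - feats[y, x])
                d = dist[y, x] + step
                if d < dist[ny, nx] and d <= max_dist:
                    dist[ny, nx] = d
                    queue.append((ny, nx))
    return dist <= max_dist

if __name__ == "__main__":
    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0  # bright square on a dark field
    print(segment(img, seed=(3, 3)).astype(int))
```

Crossing the square's boundary costs a large feature-space step (the value jumps), so the grown region stops at the edge even though the pixels are spatially adjacent.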


IEEE Transactions on Visualization and Computer Graphics | 2007

IStar: A Raster Representation for Scalable Image and Volume Data

Joe Michael Kniss; Warren A. Hunt; Kristin Potter; Pradeep Sen

Topology has been an important tool for analyzing scalar data and flow fields in visualization. In this work, we analyze the topology of multivariate image and volume data sets with discontinuities in order to create an efficient, raster-based representation we call IStar. Specifically, the topology information is used to create a dual structure that contains nodes and connectivity information for every segmentable region in the original data set. This graph structure, along with a sampled representation of the segmented data set, is embedded into a standard raster image which can then be substantially downsampled and compressed. During rendering, the raster image is upsampled and the dual graph is used to reconstruct the original function. Unlike traditional raster approaches, our representation can preserve sharp discontinuities at any level of magnification, much like scalable vector graphics. However, because our representation is raster-based, it is well suited to the real-time rendering pipeline. We demonstrate this by reconstructing our data sets on graphics hardware at real-time rates.
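The dual structure described above resembles a region adjacency graph: one node per segmentable region, with connectivity between regions that touch. The sketch below builds such a graph from a toy label image; it is an analogy for illustration, not the paper's raster embedding or reconstruction scheme.

```python
# Illustrative sketch: nodes are segmented regions, edges connect
# regions that share a pixel boundary. Such connectivity is what
# lets boundaries stay sharp after downsampling and upsampling.
import numpy as np

def region_adjacency(labels: np.ndarray):
    nodes = {int(v) for v in np.unique(labels)}
    edges = set()
    # Compare each pixel with its right and bottom neighbors.
    horiz = labels[:, :-1] != labels[:, 1:]
    vert = labels[:-1, :] != labels[1:, :]
    for a, b in zip(labels[:, :-1][horiz], labels[:, 1:][horiz]):
        edges.add((int(min(a, b)), int(max(a, b))))
    for a, b in zip(labels[:-1, :][vert], labels[1:, :][vert]):
        edges.add((int(min(a, b)), int(max(a, b))))
    return nodes, edges

if __name__ == "__main__":
    labels = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [2, 2, 2, 1]])
    print(region_adjacency(labels))  # three nodes, all pairs adjacent
```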


International Conference on New Trends in Information and Service Science | 2009

Software's Eight Essentials

Hairong Lei; Michael Claus; Ron Rammage; C. David Baer; Rene Decool; Joe Michael Kniss; Stephen W. Clyde; Donald H. Cooley; Dongxia Liu

The past 10 years have seen many changes in software development, yet the software project failure rate remains high. Agile analysis, lean software development, Scrum, and eXtreme Programming have been hot topics in recent years. How to make smart decisions based on your corporate culture and bring software projects to completion in time, in budget, and in quality (the "Three-Ins") is still a big challenge. This paper presents Software's Eight Essentials, based on hard-won industrial and academic experience. In many cases we present good practices for software development. The goal is to provide a guideline for modern software development and to minimize software project failures.


International Symposium on Biomedical Imaging | 2008

Managing uncertainty in visualization and analysis of medical data

Joe Michael Kniss

The principal goal of visualization is to create a visual representation of complex information and large datasets in order to gain insight and understanding. Our current research focuses on methods for handling uncertainty stemming from data acquisition and algorithmic sources. Most visualization methods, especially those applied to 3D data, implicitly use some form of classification or segmentation to eliminate unimportant regions and illuminate those of interest. The process of classification is inherently uncertain: in many cases the source data contains error and noise, and data transformations such as filtering can further introduce and magnify the uncertainty. More advanced classification methods rely on some sort of model or statistical method to determine what is and is not a feature of interest. While these classification methods can model uncertainty or fuzzy probabilistic memberships, they typically provide only discrete, maximum a-posteriori memberships. It is vital that visualization methods provide the user access to uncertainty in classification or image generation if the results of the visualization are to be trusted.
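The gap described above, between fuzzy probabilistic memberships and discrete maximum a-posteriori labels, is easy to demonstrate. The toy likelihoods below are assumptions purely for illustration; the point is that posterior entropy captures uncertainty that the MAP label discards.

```python
# Minimal sketch: per-class posteriors carry uncertainty that a
# hard maximum a-posteriori (MAP) label throws away.
import numpy as np

def posteriors(likelihoods: np.ndarray, priors: np.ndarray) -> np.ndarray:
    joint = likelihoods * priors            # (num_samples, num_classes)
    return joint / joint.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    likes = np.array([[0.90, 0.05],   # confidently class 0
                      [0.40, 0.38]])  # near the decision boundary
    post = posteriors(likes, priors=np.array([0.5, 0.5]))
    map_labels = post.argmax(axis=1)                    # discrete MAP labels
    entropy = -(post * np.log2(post)).sum(axis=1)       # membership uncertainty
    for p, lab, h in zip(post, map_labels, entropy):
        print(f"posterior={p.round(3)}  MAP={lab}  entropy={h:.2f} bits")
```

Both samples receive the same MAP label, but the second sample's near-maximal entropy is exactly the information a trustworthy visualization should expose to the user.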


International Conference on Machine Learning and Applications | 2008

Protein-Protein Interaction Prediction Using Single Class SVM

Hairong Lei; Joe Michael Kniss

We study the performance of the single class SVM (SCSVM) classifier on positive data points while considering the impact of SCSVM on negative protein pair data points. We compare the result with the AA classifier (amino acids maximum entropy classifier) [9] to see if a better performance can be achieved for the same data configuration. The conclusion is that although the positive classifier is slightly better than the negative one, the SCSVM classifier does not outperform the AA classifier for the current data configuration. The vote strategy does not change the SCSVM's ROC behavior but increases the confidence of the true positives. Our explanation is that in SCSVM, only one class of training data is available, so it is very hard to determine how tight the decision boundary should be to best characterize the known class. For the same reason, SCSVM tends to over-fit and under-fit easily. Furthermore, the SCSVM's performance depends on the testing data's distribution.
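For readers unfamiliar with the setup, a single-class SVM trains on positives only. The sketch below uses scikit-learn's OneClassSVM on synthetic stand-in features (the paper encodes protein pairs, which is not reproduced here); the nu parameter is the knob behind the boundary-tightness difficulty described above.

```python
# Hedged sketch of a single-class SVM evaluated on held-out
# positives and negatives it never saw during training.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
pos_train = rng.normal(loc=1.0, scale=0.3, size=(200, 8))   # stand-in "interacting" pairs
pos_test = rng.normal(loc=1.0, scale=0.3, size=(50, 8))
neg_test = rng.normal(loc=-1.0, scale=0.3, size=(50, 8))    # stand-in "non-interacting"

# nu bounds the fraction of training positives allowed outside the
# boundary; it trades over-fitting against under-fitting.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(pos_train)

tpr = (clf.predict(pos_test) == 1).mean()    # +1 = predicted inside the class
tnr = (clf.predict(neg_test) == -1).mean()   # -1 = predicted outlier
print(f"true positive rate: {tpr:.2f}, true negative rate: {tnr:.2f}")
```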

Collaboration


Dive into Joe Michael Kniss's collaborations.

Top Co-Authors

Hairong Lei
University of New Mexico

Bruce Gooch
University of Victoria

Carl R. Stern
Los Alamos National Laboratory