
Publication


Featured research published by Joseph Holub.


Artificial Intelligence in Education | 2018

Creating a Team Tutor Using GIFT

Stephen B. Gilbert; Anna Slavina; Michael C. Dorneich; Anne M. Sinatra; Desmond Bonner; Joan H. Johnston; Joseph Holub; Anastacia MacAllister; Eliot Winer

With the movement in education towards collaborative learning, it is becoming more important that learners be able to work together in groups and teams. Intelligent tutoring systems (ITSs) have been used successfully to teach individuals, but so far only a few ITSs have been used for the purpose of training teams. This is due to the difficulty of creating such systems. An ITS for teams must be able to assess complex interactions between team members (team skills) as well as the way they interact with the system itself (task skills). Assessing team skills can be difficult because they contain social components such as communication and coordination that are not readily quantifiable. This article addresses these difficulties by developing a framework to guide the authoring process for team tutors. The framework is demonstrated using a case study about a particular team tutor that was developed using a military surveillance scenario for teams of two. The Generalized Intelligent Framework for Tutoring (GIFT) software provided the team tutoring infrastructure for this task. A new software architecture required to support the team tutor is described. This theoretical framework and the lessons learned from its implementation offer conceptual scaffolding for future authors of ITSs.


Computers in Biology and Medicine | 2015

Evaluation of monoscopic and stereoscopic displays for visual-spatial tasks in medical contexts

Marisol Martinez Escobar; Bethany Juhnke; Joseph Holub; Kenneth Hisley; David J. Eliot; Eliot Winer

In the medical field, digital images are present in diagnosis, pre-operative planning, minimally invasive surgery, instruction, and training. The use of medical digital imaging has afforded new ways to interact with a patient, such as seeing fine details inside a body. This increased usage also raises many basic research questions on human perception and performance when utilizing these images. The work presented here attempts to answer the question: How would adding the stereopsis depth cue affect relative position tasks in a medical context compared to a monoscopic view? By designing and conducting a study to isolate the benefits between monoscopic 3D and stereoscopic 3D displays in a relative position task, the following hypothesis was tested: stereoscopic 3D displays are beneficial over monoscopic 3D displays for relative position judgment tasks in a medical visualization setting. 44 medical students completed a series of relative position judgment tasks. The results supported the hypothesis: the stereoscopic condition yielded higher scores than the monoscopic condition.


Proceedings of SPIE | 2013

Real-time volume rendering of digital medical images on an iOS device

Christian Noon; Joseph Holub; Eliot Winer

Performing high quality 3D visualizations on mobile devices, while tantalizingly close in many areas, is still a quite difficult task. This is especially true for 3D volume rendering of digital medical images. Enabling it would give medical personnel a powerful tool to diagnose and treat patients and to train the next generation of physicians. This research focuses on performing real time volume rendering of digital medical images on iOS devices using custom developed GPU shaders for orthogonal texture slicing. An interactive volume renderer was designed and developed with several new features including dynamic modification of render resolutions, an incremental render loop, a shader-based clipping algorithm to support OpenGL ES 2.0, and an internal backface culling algorithm for properly sorting rendered geometry with alpha blending. The application was developed using several application programming interfaces (APIs) such as OpenSceneGraph (OSG) as the primary graphics renderer coupled with iOS Cocoa Touch for user interaction, and DCMTK for DICOM I/O. The developed application rendered volume datasets of over 450 slices at up to 50-60 frames per second, depending on the specific model of the iOS device. All rendering is done locally on the device so no Internet connection is required.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

The Challenges of Building Intelligent Tutoring Systems for Teams

Desmond Bonner; Stephen B. Gilbert; Michael C. Dorneich; Eliot Winer; Anne M. Sinatra; Anna Slavina; Anastacia MacAllister; Joseph Holub

Intelligent Tutoring Systems have been useful for individual instruction and training, but have not been widely created for teams, despite the widespread use of team training and learning in groups. This paper reviews two projects that developed team tutors: the Team Multiple Errands Task (TMET) and the Recon Task developed using the Generalized Intelligent Framework for Tutoring (GIFT). Specifically, this paper 1) analyzes why team tasks have significantly more complexity than an individual task, 2) describes the two team-based platforms for team research, and 3) explores the complexities of team tutor authoring. Results include a recommended process for authoring a team intelligent tutoring system based on our lessons learned that highlights the differences between tutors for individuals and team tutors.


12th AIAA Aviation Technology, Integration, and Operations (ATIO) Conference and 14th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2012

Contextual Self-Organizing Map Visualization to Improve Optimization Solution Convergence

Joseph Holub; Trevor Richardson; Matthew Dryden; Shawn La Grotta; Eliot Winer

A self-organizing map (SOM) [5] is a type of artificial neural network that uses dimensionality reduction to allow visualization of a high-dimensional problem in a low-dimensional space, while preserving the topology of the data itself. Kohonen's SOMs, however, do not allow the map to categorize the data represented in each node. Contextual SOMs (CSOMs) alleviate this problem by labeling individual nodes. This allows a user to quickly identify each node, providing an overall view of the design space. Using CSOMs as a pre-optimization step allows a designer to select an initial starting point for an algorithm and to select an optimization method based on the modality and curvature of the data. By identifying nodes that may contain minimum values, the optimization algorithm is passed starting points that may increase solution accuracy and reliability while decreasing solution time. In this study, multiple unimodal and multimodal optimization problems were solved using CSOMs as a pre-optimization step. Multimodal problems were solved using a pheromone particle swarm optimization (PSO) method [6], while unimodal problems were solved using a Quasi-Newton line search implemented through Matlab [7].
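To make the SOM machinery concrete, the sketch below implements minimal sequential Kohonen training in Python. The grid size, decay schedules, and Gaussian neighbourhood are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, epochs=40, lr0=0.5, sigma0=3.0, seed=0):
    """Sequential Kohonen SOM training: for each sample, find the
    best-matching unit (BMU) and pull it and its grid neighbours
    toward the sample, with decaying learning rate and radius."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates of every node, used by the neighbourhood function.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            t = step / n_steps
            lr = lr0 * (1.0 - t)              # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 0.5  # decaying neighbourhood radius
            # BMU: node whose weight vector is closest to the sample.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU on the 2-D grid.
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, :, None] * (x - weights)
            step += 1
    return weights
```

A contextual SOM would then label each trained node, for example with the objective values of the designs that map to it, so promising regions can be picked as optimization starting points.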


AIAA Journal | 2014

Visualizing Design Spaces Using Two-Dimensional Contextual Self-Organizing Maps

Trevor Richardson; Brett Nekolny; Joseph Holub; Eliot Winer

Visualization of design spaces is a complex problem that has the potential to provide many benefits. Design spaces can be easily visualized with two or three design variables using a range of methods. However, once a problem exceeds this limit, direct visualization that captures all necessary behaviors becomes difficult. To visualize these higher dimensions, it is necessary to use visual cues such as color, size, and/or symbols to show the added dimensions. The disadvantage to using visual cues is the inability to expand much beyond three dimensions. This research focuses on using contextual self-organizing maps to provide a solution to visualizing high-dimensional design spaces by using the dimensionality-reduction capabilities of self-organizing maps. A visual representation is created by generating a self-organizing map and applying objective function values as the contextual labels. The map is then broken into three different maps containing separate contextual information, namely the mean, minimum, a...


Proceedings of SPIE | 2013

Comparing the Microsoft Kinect to a traditional mouse for adjusting the viewed tissue densities of three-dimensional anatomical structures

Bethany Juhnke; Monica Berron; Adriana Philip; Jordan Williams; Joseph Holub; Eliot Winer

Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and working with three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration is not well suited to manipulating 3D models because these traditional interface devices operate with two degrees of freedom, not the six degrees of freedom available in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
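Windowing as described above is a linear remapping of raw intensities into display grey levels. The sketch below uses the common center/width parameterization from radiology; the function name is illustrative and is not code from Isis:

```python
import numpy as np

def apply_window(hu, center, width):
    """Map raw intensities (e.g. Hounsfield units) to display grey
    levels [0, 255]: values below the window clamp to black, values
    above clamp to white, and values inside ramp linearly."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = (hu - lo) / (hi - lo)  # linear ramp inside the window
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```

With a typical soft-tissue window (center 40, width 400), air (-1000) maps to black and dense bone (3000) to white, while soft tissue spreads across the grey levels.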


53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference<BR>20th AIAA/ASME/AHS Adaptive Structures Conference<BR>14th AIAA | 2012

Three Dimensional Multi-Objective UAV Path Planning Using Digital Pheromone Particle Swarm Optimization

Joseph Holub; Jung Leng Foo; Vijay Kalivarapu; Eliot Winer

Military operations are turning to more complex and advanced automation technology for minimum risk and maximum efficiency. A critical piece to this strategy is unmanned aerial vehicles (UAVs). UAVs require the intelligence to safely maneuver along a path to an intended target, avoiding obstacles such as other aircraft or enemy threats. Often automated path planning algorithms are employed to specify targets for a UAV to investigate. To date, path-planning algorithms have been limited to providing only a single solution (alternate path) without further input from a pilot. This paper uses digital pheromones to improve upon a previously developed multi-objective path planner that uses Particle Swarm Optimization (PSO) to generate multiple solution paths based on predefined criteria. The problem formulation is designed to minimize risk due to enemy threats and fuel consumption while maximizing reconnaissance and eliminating terrain violations. The implementation of digital pheromone PSO increases the efficiency and reliability of paths returned to the operator. The decrease in iterations allows alternate paths to be returned in real time, aiding in efficient decision making by the UAV operator. The implementation of Digital Pheromone PSO is described below along with the results of simulated scenarios.
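As a rough sketch of how a digital-pheromone term can be grafted onto PSO, the code below adds an extra attraction toward a pheromone position alongside the usual personal-best and global-best pulls. The coefficients and the single-pheromone model are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def pheromone_pso(f, bounds, n_particles=30, iters=200,
                  w=0.7, c1=1.5, c2=1.5, c3=1.0, seed=0):
    """Minimal PSO with a 'digital pheromone' attraction term: a
    pheromone is deposited at the best point found so far and pulls
    particles toward it in addition to pbest and gbest."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    pheromone = g.copy()                   # pheromone at best found point
    for _ in range(iters):
        r1, r2, r3 = rng.random((3,) + x.shape)
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
             + c3 * r3 * (pheromone - x))  # extra pheromone attraction
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        if pbest_f.min() < f(g):
            g = pbest[np.argmin(pbest_f)].copy()
            pheromone = g.copy()           # refresh pheromone at new best
    return g, f(g)
```

In the path-planning setting, f would score a candidate path on threat exposure, fuel, reconnaissance value, and terrain violations; here any scalar objective works.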


15th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference | 2014

Improving Contextual Self-Organizing Map Solution Times Using GPU Parallel Training

Trevor Richardson; Joseph Holub; Eliot Winer

Visualizing n-dimensional design or optimization data is very challenging using current methods and technologies. Many current techniques perform dimensionality reduction or other “compression” methods to show views of the data in two or three dimensions. Designers are left to infer the relationships with other independent and dependent variables being considered. Contextual self-organizing maps offer a way to process a view and interact with all dimensions of design data simultaneously. Contextual self-organizing maps are a form of neural network that can be used to understand the complex relationships between large amounts of high-dimensional data, as was shown in previous work by the authors. This original formulation of contextual self-organizing maps used a sequential training method that took significant amounts of training time with large datasets. Batch self-organizing maps provide a data-independent training method that allows the training process to be parallelized. This research parallelizes the batch self-organizing map by combining network-partitioning and data-partitioning methods with CUDA on the GPU to achieve significant training time reductions.
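The key property exploited here is that a batch SOM epoch has no sample-order dependence: all BMUs are found first, then every node moves to a neighbourhood-weighted mean, so both steps split cleanly across threads. The NumPy sketch below shows one such epoch; the actual CUDA network/data partitioning is not reproduced:

```python
import numpy as np

def batch_som_epoch(weights, data, sigma):
    """One batch SOM epoch. Step 1 is data-parallel (one BMU search
    per sample); step 2 is node-parallel (one weighted mean per node).
    Neither step depends on another thread's update."""
    gh, gw, _ = weights.shape
    gy, gx = np.mgrid[0:gh, 0:gw]
    # Step 1: BMU for every sample at once.
    d = np.linalg.norm(weights[None] - data[:, None, None, :], axis=3)
    flat = d.reshape(len(data), -1).argmin(axis=1)
    by, bx = np.unravel_index(flat, (gh, gw))
    # Step 2: Gaussian neighbourhood weights of every sample's BMU
    # over the whole grid, then a weighted mean per node.
    h = np.exp(-(((gy[None] - by[:, None, None]) ** 2 +
                  (gx[None] - bx[:, None, None]) ** 2) / (2 * sigma ** 2)))
    num = np.einsum('nij,nf->ijf', h, data)   # sum of h-weighted samples
    den = h.sum(axis=0)[:, :, None]           # sum of weights per node
    return np.where(den > 1e-12, num / den, weights)
```

Shrinking sigma across epochs moves the map from a coarse ordering to a fine fit, exactly as in sequential training but without any per-sample update serialization.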


Journal of Digital Imaging | 2017

Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device

Joseph Holub; Eliot Winer

Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static anatomy to viewing dynamic physiology (4D) over time, particularly to study brain activity. Combined with the rapid adoption of mobile devices for everyday work, this creates a need to visualize fMRI data on tablets or smartphones. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7” iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
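The inner loop of any volume raycaster is front-to-back alpha compositing along each ray, which also enables the early-ray-termination optimization that matters on constrained mobile GPUs. A minimal single-ray sketch (the paper's GPU shader implementation is not reproduced here):

```python
import numpy as np

def composite_ray(samples):
    """Front-to-back alpha compositing along one ray. `samples` is a
    sequence of (rgb, alpha) pairs already ordered from the eye into
    the volume. Stops early once the ray is effectively opaque."""
    color = np.zeros(3)
    alpha = 0.0
    for rgb, a in samples:
        color += (1.0 - alpha) * a * np.asarray(rgb, float)
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early termination: later samples invisible
            break
    return color, alpha
```

Coarser sampling intervals shorten this loop at the cost of image quality, which is the trade-off behind the reported 20-40 fps range.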
