Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jeremiah Neubert is active.

Publication


Featured research published by Jeremiah Neubert.


international symposium on mixed and augmented reality | 2007

Semi-Autonomous Generation of Appearance-based Edge Models from Image Sequences

Jeremiah Neubert; John Pretlove; Tom Drummond

Many of the robust visual tracking techniques utilized by augmented reality applications rely on 3D models and information extracted from images. Models enhanced with image information make it possible to initialize tracking and detect poor registration. Unfortunately, generating 3D CAD models and registering them to image information can be a time-consuming operation; the process regularly requires multiple trips between the site being modeled and the workstation used to create the model. The system presented in this work eliminates the need for a separately generated 3D model by utilizing modern structure-from-motion techniques to extract the model and associated image information directly from an image sequence. The technique can be implemented on any handheld device instrumented with a camera and a network connection. Creating the model requires minimal user interaction in the form of a few cues to identify planar regions on the object of interest. In addition, the system selects a set of keyframes for each region to capture viewpoint-based appearance changes. This work also presents a robust tracking framework to take advantage of these new edge models. The performance of both the modeling technique and the tracking system is verified on several different objects.
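The viewpoint-based keyframe selection described above can be illustrated with a greedy angular-coverage rule: keep a frame only when its viewing direction differs from every kept keyframe by more than a threshold. This is a minimal sketch under that assumption; the paper's actual selection criterion is not specified here.

```python
import numpy as np

def select_keyframes(view_dirs, max_angle_deg=20.0):
    """Greedily keep frames whose viewing direction to a planar region
    differs from all previously kept keyframes by more than a threshold,
    so the kept set covers viewpoint-based appearance changes."""
    cos_thresh = np.cos(np.deg2rad(max_angle_deg))
    units = [d / np.linalg.norm(d) for d in view_dirs]
    keep = []
    for i, d in enumerate(units):
        # Keep frame i if it is angularly far from every kept keyframe
        # (all() is vacuously true for the first frame, so it is kept).
        if all(np.dot(d, units[j]) < cos_thresh for j in keep):
            keep.append(i)
    return keep
```

For a camera sweeping through viewing angles of 0, 5, 10, 25, and 30 degrees, a 20-degree threshold keeps only the first and fourth frames.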


computer science and information engineering | 2009

MARTI: Mobile Augmented Reality Tool for Industry

Brandon Stutzman; David Nilsen; Tanner Broderick; Jeremiah Neubert

A large number of augmented reality (AR) applications have been developed for handheld platforms built around ultra-mobile computers. Unfortunately, these platforms, created for research, are not appropriate for demonstrations and user testing because their ad hoc nature affects the user experience. An AR platform designed for the average user will provide more accurate data on software usability. This paper presents such a platform, designed with user input to meet users' expectations. The result is a two-handed device that a randomly selected group of users would want to have in their workplace. This platform is ideal for testing and commercializing industrial augmented reality applications.


arXiv: Computer Vision and Pattern Recognition | 2011

On-Board Visual Tracking with Unmanned Aircraft System (UAS)

Ashraf Qadir; Jeremiah Neubert; William H. Semke

This paper presents the development of a real-time tracking algorithm that runs on a 1.2 GHz PC/104 computer on board a small UAV. The algorithm uses zero mean normalized cross correlation to detect and locate an object in the image. A Kalman filter is used to make the tracking algorithm computationally efficient. The object's position in an image frame is predicted using the motion model, and a search window, centered at the predicted position, is generated. The object's position is updated with the measurement from object detection. The detected position is sent to the motion controller to move the gimbal so that the object stays at the center of the image frame. Detection and tracking are carried out autonomously on the payload computer, and the system can operate in two different modes. The first mode starts detecting and tracking using a stored image patch. The second mode allows the operator on the ground to select the object of interest for the UAV to track. The system is capable of re-detecting an object in the event of tracking failure. Performance of the tracking system was verified both in the lab and in the field by mounting the payload on a vehicle and simulating a flight. Tests show that the system can detect and track a diverse set of objects in real time. Flight testing of the system will be conducted at the next available opportunity.
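The detection step the abstract describes can be sketched as zero mean normalized cross correlation (ZNCC) scored exhaustively over a search window centered at the Kalman-predicted position. The function names and the window size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def zncc(patch, window):
    """Zero mean normalized cross correlation between a template patch
    and an equally sized image window; returns a score in [-1, 1]."""
    p = patch - patch.mean()
    w = window - window.mean()
    denom = np.sqrt((p ** 2).sum() * (w ** 2).sum())
    return float((p * w).sum() / denom) if denom > 0 else 0.0

def detect(image, patch, predicted_center, half_search=20):
    """Scan a search window centered at the predicted (row, col) position
    and return the top-left location with the highest ZNCC score."""
    ph, pw = patch.shape
    cy, cx = predicted_center
    best_score, best_pos = -1.0, predicted_center
    for y in range(max(cy - half_search, 0), min(cy + half_search, image.shape[0] - ph)):
        for x in range(max(cx - half_search, 0), min(cx + half_search, image.shape[1] - pw)):
            s = zncc(patch, image[y:y + ph, x:x + pw])
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```

Restricting the scan to a window around the Kalman prediction is what makes this affordable on a small embedded computer: the cost scales with the window area rather than the full frame.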


Journal of Intelligent and Robotic Systems | 2014

Vision Based Neuro-Fuzzy Controller for a Two Axes Gimbal System with Small UAV

Ashraf Qadir; William H. Semke; Jeremiah Neubert

This paper presents the development of a vision-based neuro-fuzzy controller for a two axes gimbal system mounted on a small Unmanned Aerial Vehicle (UAV). The controller uses vision-based object detection as input and generates pan and tilt motion and velocity commands for the gimbal in order to keep the object of interest at the center of the image frame. A radial basis function based neuro-fuzzy system and a learning algorithm are developed for the controller to address the dynamic and non-linear characteristics of the gimbal movement. The controller uses two separate but identical radial basis function networks, one for pan and one for tilt motion of the gimbal. Each network is initialized with a fixed number of neurons that act as the rule basis for the fuzzy inference system. The membership functions and rule strengths are then adjusted with feedback from the visual tracking system. The controller is trained off-line until a desired error level is achieved. Training is then continued on-line to allow the system to accommodate air speed changes. The algorithm learns from the error computed from the detected position of the object in the image frame and generates position and velocity commands for the gimbal movement. Several tests, including lab tests and actual flight tests of the UAV, have been carried out to demonstrate the effectiveness of the controller. Test results show that the controller is able to converge effectively and generate accurate position and velocity commands to keep the object at the center of the image frame.
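The per-axis structure described above can be sketched as a normalized radial basis function network whose fixed Gaussian rules map the pixel error to a velocity command, with output weights adapted from tracking feedback. All parameter values, the update rule, and the class interface are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class RBFAxisController:
    """One radial basis function network per gimbal axis: maps the
    object's pixel error to a velocity command and adapts its output
    weights from a supervising target (a sketch, not the paper's rules)."""

    def __init__(self, n_rules=7, spread=50.0, lr=0.5):
        # Fixed rule centers spread over the expected pixel-error range.
        self.centers = np.linspace(-150.0, 150.0, n_rules)
        self.spread = spread
        self.weights = np.zeros(n_rules)
        self.lr = lr

    def _phi(self, err):
        # Gaussian membership of the pixel error in each rule.
        return np.exp(-((err - self.centers) ** 2) / (2 * self.spread ** 2))

    def command(self, err):
        # Normalized weighted sum of rule activations (fuzzy inference).
        phi = self._phi(err)
        return float(self.weights @ phi / phi.sum())

    def train(self, err, target_cmd):
        # Nudge the weights so the command approaches the target command.
        phi = self._phi(err)
        phi_n = phi / phi.sum()
        self.weights += self.lr * (target_cmd - self.command(err)) * phi_n
```

Off-line training against a simple proportional target, followed by continued on-line updates, mirrors the two-phase training the abstract describes.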


international conference on robotics and automation | 2002

Robust active stereo calibration

Jeremiah Neubert; Nicola J. Ferrier

We present a calibration procedure to determine the kinematic parameters of an active stereo system in a robot-centric frame of reference. Our goal was to obtain a solution of sufficient accuracy that the kinematic model information can be used to estimate scene structure given the measured motion/position of the eye and the target object's image location. We formulate our problem using canonical coordinates of the rotation group, which enables a particularly simple closed-form solution. Additionally, this formulation and solution provide quantitative measures of the resulting solutions. Experiments verify that the solutions are accurate, 3D structure can be estimated from the kinematic models, and the algorithm can indicate when image errors are large enough to produce unreliable results.
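Canonical coordinates of the rotation group are the axis-angle vector related to a rotation matrix by the exponential map (Rodrigues' formula) and its inverse logarithm. A minimal sketch of that pair of maps, which underlies formulations like the one above; this is standard SO(3) machinery, not the paper's specific solver.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: canonical coordinates w -> rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = hat(w / theta)
    return np.eye(3) + np.sin(theta) * k + (1 - np.cos(theta)) * (k @ k)

def log_so3(R):
    """Inverse map: rotation matrix -> canonical coordinates
    (valid away from the theta == pi singularity)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * w
```

Working in the three-vector w rather than the nine entries of R is what makes closed-form least-squares solutions over rotations tractable.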


international conference on robotics and automation | 2001

Automatic training of a neural net for active stereo 3D reconstruction

Jeremiah Neubert; T. Hammond; N. Guse; Y. Do; Yu Hen Hu; Nicola J. Ferrier

Addresses the problem of recovering 3D geometry using an active stereo vision system. Calibration procedures can be adapted to the active stereo configuration; however, considerable effort is required to accurately model and calibrate the kinematics to avoid poor reconstruction. In the active stereo case there will also be errors due to uncertainty in the kinematics of the system. In addition, data collection needs to be automated because active stereo requires significantly more information for calibration. We present a biologically inspired neural network trained to determine the mapping between 3D geometry and stereo image points. To train the network, we have developed a system to automatically collect accurate calibration data. We compare the reconstructed 3D geometry obtained using a kinematic model based approach with our neural network approach.


international conference on pattern recognition | 2006

Direct Mapping of Visual Input to Motor Torques

Jeremiah Neubert; Nicola J. Ferrier

Most methods for visual control of robots formulate the robot command in joint or Cartesian space. To move the robot, these commands are remapped to motor torques, usually requiring a dynamic model of the robot. In this paper we present a method for parameterizing joint torques and learning to map visual input directly to them. The system is implemented and used to control a CRS 465 robot. The results of the implementation demonstrate that the parameterization of the torques allows both the motion and position of the robot's end effector to be controlled. Moreover, it is shown that it is possible to map visual input directly to joint torques.


AIAA Infotech@Aerospace Conference | 2009

Optimizing UAS Multispectral Data Collection and Post Processing Techniques

David Dvorak; Jeremiah Neubert; William H. Semke; Jesse R. Sorum; Kyle B. Anderson; Richard R. Schultz

This paper describes the enhancements made to a system capable of capturing multispectral and position data that is processed into information useful to agriculturalists. The system consists of a digital imaging payload integrated into a small Unmanned Aircraft System (UAS). A commercially available multispectral camera and GPS unit are packaged along with supporting components to create a lightweight and inexpensive payload. A small UAS is used to fly the payload over a field; the generated data can then be processed and delivered to today's precision agriculture community. Site-specific crop management is achievable if the captured multispectral images can be successfully mosaiced and geo-referenced. Image overlap, capture rate, and clarity determine whether or not collected images can be used to create an accurate representation of vegetation health. GPS stamps, indicating where particular images were captured, can be used to enhance the spatial accuracy of the rendered mosaic. The final product takes the form of a digital map depicting relative crop health within a field.
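A relative crop-health map from multispectral bands is typically computed with a vegetation index; the abstract does not name one, but the Normalized Difference Vegetation Index (NDVI) is the standard choice for near-infrared/red imagery, shown here as an assumption.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index per pixel:
    (NIR - Red) / (NIR + Red), in [-1, 1]. Healthy vegetation reflects
    strongly in NIR and absorbs red, so higher values mean healthier crops.
    eps guards against division by zero on dark pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)
```

Applied to each geo-referenced tile of the mosaic, the index values can be color-mapped to produce the relative crop-health map the paper describes as its final product.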


international conference on robotics and automation | 2010

Motion generation through biologically-inspired torque pulses

Jeremiah Neubert; Nicola J. Ferrier

Traditional robot controllers are not designed to produce human-like reactive motion: movements lasting tens of sample periods and requiring large accelerations. One of the major obstacles to producing reactive motions with contemporary controllers is that they rely on kinematic commands. The performance of short duration motions requiring large accelerations is dominated by the motion's dynamics; kinematic commands without an accurate dynamic model of the robot and task will lead to poor performance. Conversely, this paper presents a biologically inspired "torque command" that allows the dynamics of the motion to be communicated to the controller. The commands can be produced with only minor modifications to any existing control scheme and will not impact traditional operation. In addition, sensory input can be mapped directly to the presented commands for latency-sensitive tasks. The ability of the new commands to express a broad range of motions with a small number of parameters is shown experimentally. The experimental results also show that the presented torque commands can be used to learn a ball intercept task.
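One plausible form of a torque command expressing "a broad range of motions with a small number of parameters" is a smooth pulse defined by just an amplitude and a duration; the sketch below uses a raised-cosine profile as an assumption, since the paper's exact parameterization is not given here.

```python
import numpy as np

def torque_pulse(t, amplitude, duration):
    """A bell-shaped torque pulse parameterized by only amplitude and
    duration: zero outside [0, duration], smooth onset and offset,
    peak torque `amplitude` at t = duration / 2."""
    if t < 0.0 or t > duration:
        return 0.0
    return amplitude * 0.5 * (1.0 - np.cos(2.0 * np.pi * t / duration))
```

For a unit-inertia joint, the impulse (time integral) of this pulse is amplitude * duration / 2, so the two parameters directly set the velocity change the motion produces; that is the sense in which the command communicates the motion's dynamics rather than a kinematic target.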


scandinavian conference on image analysis | 2017

DEBC Detection with Deep Learning

Ian E. Nordeng; Ahmad Hasan; Doug Olsen; Jeremiah Neubert

This work presents a novel system utilizing state-of-the-art deep convolutional neural networks to detect dead end body components (DEBCs) to reduce costs for inspections and maintenance of high tension power lines. A series of data augmentation techniques were implemented to develop 2,437 training images from 146 images gathered in a sensor trade study and a UAS inspection test flight. Training was completed using the Python implementation of the Faster R-CNN object detection network with the VGG16 model. After testing the network on 111 aerial inspection photos captured with a UAS, the resulting convolutional neural network (CNN) was capable of an accuracy of 83.7% and a precision of 91.8%. The addition of 270 training images and the inclusion of insulators increased detection accuracy and precision to 97.8% and 99.1%, respectively.
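Growing 146 source images into thousands of training images relies on augmentation; a minimal sketch using only geometric transforms (90-degree rotations plus horizontal flips), which is one common choice; the paper's exact augmentation set is not specified here.

```python
import numpy as np

def augment(image):
    """Return the 8 orientation variants of an image: the four
    90-degree rotations and a horizontal flip of each. Applied to a
    small labeled set, this multiplies the number of training images
    without new data collection."""
    variants = []
    for k in range(4):
        rot = np.rot90(image, k)       # rotate by k * 90 degrees
        variants.append(rot)
        variants.append(np.fliplr(rot))  # mirrored copy of each rotation
    return variants
```

Combined with crops, brightness jitter, and similar photometric transforms, a pipeline like this is how a few hundred inspection photos become a training set large enough for a detector such as Faster R-CNN.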

Collaboration


Dive into Jeremiah Neubert's collaborations.

Top Co-Authors

Deborah Worley, University of North Dakota
Mohammad Khavanin, University of North Dakota
Naima Kaabouch, University of North Dakota
William H. Semke, University of North Dakota
Ashraf Qadir, University of North Dakota
Nicola J. Ferrier, University of Wisconsin-Madison
Ahmad Hasan, University of North Dakota
David Dvorak, University of North Dakota