Jonathan Maycock
Bielefeld University
Publications
Featured research published by Jonathan Maycock.
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2007
Jonathan Maycock; Bryan M. Hennelly; John McDonald; Yann Frauel; Albertina Castro; Bahram Javidi; Thomas J. Naughton
We present a digital signal processing technique that reduces the speckle content in reconstructed digital holograms. The method is based on sequential sampling of the discrete Fourier transform of the reconstructed image field. Speckle reduction is achieved at the expense of a reduced intensity and resolution, but this trade-off is shown to be greatly superior to that imposed by the traditional mean and median filtering techniques. In particular, we show that the speckle can be reduced by half with no loss of resolution (according to standard definitions of both metrics).
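A minimal numpy sketch of the kind of Fourier-plane sampling the abstract describes: the spectrum of the reconstructed field is split into non-overlapping blocks, each block is reconstructed separately, and the resulting intensities are averaged. The function name, the simple block partition, and the 2x2 default are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def speckle_reduce(field, n=2):
    """Sketch: reduce speckle by averaging intensities reconstructed from
    non-overlapping blocks of the DFT of the complex reconstructed field."""
    F = np.fft.fftshift(np.fft.fft2(field))
    h, w = F.shape
    acc = np.zeros((h, w))
    for i in range(n):
        for j in range(n):
            # keep only one of the n x n sub-blocks of the spectrum
            mask = np.zeros_like(F)
            rows = slice(i * h // n, (i + 1) * h // n)
            cols = slice(j * w // n, (j + 1) * w // n)
            mask[rows, cols] = F[rows, cols]
            sub = np.fft.ifft2(np.fft.ifftshift(mask))
            acc += np.abs(sub) ** 2   # accumulate per-block intensities
    return acc / (n * n)              # averaged, lower-speckle intensity
```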
Applied Optics | 2006
Jonathan Maycock; Conor P. McElhinney; Bryan M. Hennelly; Thomas J. Naughton; John McDonald; Bahram Javidi
We propose a task-specific digital holographic capture system for three-dimensional scenes, which can reduce the amount of data sent from the camera system to the receiver and can effectively reconstruct partially occluded objects. The system requires knowledge of the object of interest, but it does not require a priori knowledge of either the occlusion or the distance the object is from the camera. Subwindows of the camera-plane Fresnel field are digitally propagated to reveal different perspectives of the scene, and these are combined to overcome the unknown foreground occlusions. The nature of the occlusions and the effect of subwindows are analyzed thoroughly by using the Wigner distribution function. We demonstrate that a careful combination of reconstructions from subwindows can reveal features that are not apparent in a reconstruction from the whole hologram. We provide results by using optically captured digital holograms of real-world objects and simulated occlusions.
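The following is a rough sketch, under assumed parameters, of propagating subwindows of a camera-plane field to a fixed depth and combining the per-subwindow intensities. A single-chirp Fresnel transform and a median combination rule stand in for the paper's actual propagation and combination steps, and all names (fresnel_propagate, reconstruct_from_subwindows, win, step) are hypothetical.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Single-FFT Fresnel propagation of a sampled complex field (sketch)."""
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * k / (2 * z) * (X ** 2 + Y ** 2))
    return np.fft.fftshift(np.fft.fft2(field * chirp))

def reconstruct_from_subwindows(hologram, wavelength, pitch, z, win=256, step=128):
    """Combine per-subwindow reconstructions by taking the median intensity,
    an assumed rule for suppressing foreground occlusions that only block
    some of the perspectives."""
    stacks = []
    ny, nx = hologram.shape
    for r in range(0, ny - win + 1, step):
        for c in range(0, nx - win + 1, step):
            sub = np.zeros_like(hologram)
            sub[r:r + win, c:c + win] = hologram[r:r + win, c:c + win]
            stacks.append(np.abs(fresnel_propagate(sub, wavelength, pitch, z)) ** 2)
    return np.median(np.stack(stacks), axis=0)
```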
Künstliche Intelligenz | 2010
Jonathan Maycock; Daniel Dornbusch; Christof Elbrechter; Robert Haschke; Thomas Schack; Helge Ritter
Grasping and manual interaction for robots has so far largely been approached with an emphasis on physics and control aspects. Given the richness of human manual interaction, we argue for the consideration of the wider field of “manual intelligence” as a perspective for manual action research that brings the cognitive nature of human manual skills to the foreground. We briefly sketch part of a research agenda along these lines, argue for the creation of a manual interaction database as an important cornerstone of such an agenda, and describe the manual interaction lab recently set up at CITEC to realize this goal and to connect the efforts of robotics and cognitive science researchers towards a more integrated understanding of manual intelligence.
Eye Tracking Research & Applications | 2012
Kai Essig; Daniel Dornbusch; Daniel Prinzhorn; Helge Ritter; Jonathan Maycock; Thomas Schack
We implemented a system, called the VICON-EyeTracking Visualizer, that combines mobile eye-tracking data with motion-capture data to calculate and visualize the 3D gaze vector within the motion-capture coordinate system. To ensure that both devices were temporally synchronized, we used software we had previously developed. Placing reflective markers on objects in the scene makes their positions known, and spatially synchronizing the eye tracker with the motion-capture system allows us to automatically compute how often and where fixations occur, thus overcoming the time-consuming and error-prone disadvantages of the traditional manual annotation process. We evaluated our approach by comparing its output for a simple looking task and a more complex grasping task against the average results produced by manual annotation. Preliminary data reveal that the program differed from the average manual annotation results by only approximately 3 percent in the looking task with regard to the number of fixations and the cumulative fixation duration on each point in the scene. In the more complex grasping task the results depend on object size: for larger objects there was good agreement (a difference of less than 16 percent, or 950 ms), but this degraded for smaller objects, where there are more saccades towards object boundaries. The advantages of our approach are easy user calibration, unrestricted body movements (due to the mobile eye-tracking system), and compatibility with any wearable eye tracker and marker-based motion-tracking system. Extending existing approaches, our system is also able to monitor fixations on moving objects. The automatic analysis of gaze and movement data in complex 3D scenes can be applied to a variety of research domains, e.g., human-computer interaction, virtual reality, or grasping and gesture research.
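A small sketch of the core geometric step such a system needs: turning a head-local gaze direction into a world-frame ray using the head pose recovered from mocap markers, then assigning the fixation to the nearest marker-tagged object. The 5 cm threshold and all function and parameter names are assumptions for illustration, not the published implementation.

```python
import numpy as np

def gaze_ray_world(head_R, head_t, gaze_dir_local, eye_offset_local):
    """Transform a head-local gaze direction into the motion-capture (world)
    frame, given the head pose estimated from reflective markers."""
    origin = head_R @ eye_offset_local + head_t
    direction = head_R @ gaze_dir_local
    return origin, direction / np.linalg.norm(direction)

def fixated_object(origin, direction, object_positions, max_dist=0.05):
    """Return the index of the object marker closest to the gaze ray,
    or None if no object lies within max_dist metres of the ray."""
    best, best_d = None, max_dist
    for i, p in enumerate(object_positions):
        v = p - origin
        d = np.linalg.norm(v - np.dot(v, direction) * direction)  # point-to-ray distance
        if d < best_d:
            best, best_d = i, d
    return best
```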
International Conference on Robotics and Automation | 2014
Matthias Schröder; Jonathan Maycock; Helge Ritter; Mario Botsch
We present a method for real-time bare hand tracking that utilizes natural hand synergies to reduce the complexity and improve the plausibility of the hand posture estimation. The hand pose and posture are estimated by fitting a virtual hand model to the 3D point cloud obtained from a Kinect camera using an inverse kinematics approach. We use real human hand movements captured with a Vicon motion tracking system as the ground truth for deriving natural hand synergies based on principal component analysis. These synergies are integrated in the tracking scheme by optimizing the posture in a reduced parameter space. Tracking in this reduced space combined with joint limit avoidance constrains the posture estimation to natural hand articulations. The information loss associated with dimension reduction can be dealt with by employing a hierarchical optimization scheme. We show that our synergistic hand tracking approach improves runtime performance and increases the quality of the posture estimation.
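A hedged sketch of how such synergies could be derived and used, assuming joint-angle postures recorded as rows of a matrix: PCA via SVD yields a low-dimensional basis, and postures are reconstructed from a few synergy coefficients, with a simple clamp standing in for joint-limit avoidance. The names and the choice of eight components are illustrative.

```python
import numpy as np

def hand_synergies(joint_angle_data, n_components=8):
    """Derive hand synergies from recorded joint-angle vectors (sketch).
    Rows of joint_angle_data are postures; columns are joint angles."""
    mean = joint_angle_data.mean(axis=0)
    centered = joint_angle_data - mean
    # principal components via SVD of the centered data matrix
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:n_components]            # (mean posture, synergy basis)

def posture_from_synergies(mean, basis, coeffs, lo, hi):
    """Map low-dimensional synergy coefficients back to joint angles,
    clamping to joint limits as a stand-in for joint-limit avoidance."""
    return np.clip(mean + coeffs @ basis, lo, hi)
```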
IEEE-RAS International Conference on Humanoid Robots | 2012
Matthias Schröder; Christof Elbrechter; Jonathan Maycock; Robert Haschke; Mario Botsch; Helge Ritter
We extend a recent low cost real-time method of hand tracking and pose estimation in order to control an anthropomorphic robot hand. The approach is data-driven and based on matching the current image of a color-gloved hand with the best fitting image in a database to retrieve the posture. Then, using depth information from a Kinect camera and a color-sensitive iterative closest point-to-triangle algorithm we can very accurately estimate the absolute position and orientation of the hand. The effectiveness of the approach is demonstrated in an application in which we actively control a 20 DOF anthropomorphic robot hand in a manual interaction grasping task.
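As an illustration of the alignment stage, here is a plain point-to-point ICP with a Kabsch/SVD pose update; the paper's color-sensitive point-to-triangle variant is more involved, so treat this purely as a minimal stand-in with hypothetical names.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src_pts, dst_pts, dst_tree):
    """One iteration of point-to-point ICP: match nearest neighbours,
    then compute the best-fit rigid transform via the Kabsch/SVD method."""
    _, idx = dst_tree.query(src_pts)
    matched = dst_pts[idx]
    src_c, dst_c = src_pts.mean(0), matched.mean(0)
    H = (src_pts - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def align(src_pts, dst_pts, iters=20):
    """Iteratively align the source point cloud to the destination cloud."""
    tree = cKDTree(dst_pts)
    pts = src_pts.copy()
    for _ in range(iters):
        R, t = icp_step(pts, dst_pts, tree)
        pts = pts @ R.T + t
    return pts
```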
Hybrid Artificial Intelligence Systems | 2010
Marcel Martin; Jonathan Maycock; Florian Schmidt; Oliver Kramer
The recognition of manual actions, i.e., hand movements, hand postures and gestures, plays an important role in human-computer interaction, yet belongs to a category of particularly difficult tasks. Using a Vicon system to capture 3D spatial data, we investigate the recognition of manual actions in tasks such as pouring a cup of milk and writing in a book. We propose recognizing sequences in multidimensional time series by first learning a smooth quantization of the data, and then using a variant of dynamic time warping to recognize short sequences of prototypical motions within a long unknown sequence. An experimental analysis validates our approach: short manual actions are successfully recognized, and the approach is shown to be spatially invariant. We also show that the approach speeds up processing without decreasing recognition performance.
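A minimal sketch of the recognition idea, assuming prototypes and the unknown recording are multivariate time series: a plain DTW distance plus a sliding-window spotting rule stand in for the paper's smooth quantization and DTW variant. All names and the step size are illustrative.

```python
import numpy as np

def dtw(a, b):
    """Plain dynamic time warping distance between two multivariate
    sequences (rows = time steps)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def spot_action(long_seq, prototype, step=5):
    """Slide the prototype over the long sequence and return the
    (distance, offset) pair with the smallest DTW distance."""
    w = len(prototype)
    scores = [(dtw(long_seq[s:s + w], prototype), s)
              for s in range(0, len(long_seq) - w + 1, step)]
    return min(scores)
```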
Intelligent Robots and Systems | 2011
Jonathan Maycock; Jan Frederik Steffen; Robert Haschke; Helge Ritter
Creating manual interaction databases, which aid the replication of dexterous capabilities with anthropomorphic robot hands by exploiting information about how humans perform complex manipulation tasks, requires the ability to record and analyze large numbers of manual interaction sequences. To this end we have studied and compared three mappings from captured human hand motion data to a simulated hand model, all of which allow for robust and accurate real-time hand posture tracking. We evaluate the effectiveness of these mappings and discuss their pros and cons in various real-world scenarios. The first method is based on data glove readings and aims for direct gauging of hand joint angles. The other two methods utilize a VICON motion tracking system that monitors markers placed on all finger segments. Here we compare two approaches: a direct computation of hand postures from the angles between adjacent markers, and an iterative inverse kinematics approach that optimally reproduces fingertip positions. For a quantitative evaluation, we employ a “calibration objects” technique to obtain a reliable ground truth of task-relevant hand posture data.
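For the direct marker-based mapping, the per-joint computation can be as simple as the angle between adjacent finger-segment markers; the sketch below shows that single step under assumed marker naming, not the authors' full pipeline.

```python
import numpy as np

def joint_angle(m_prox, m_joint, m_dist):
    """Estimate a joint flexion angle from three adjacent finger-segment
    marker positions (proximal, joint, distal); 0 means a straight joint."""
    u = m_prox - m_joint
    v = m_dist - m_joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))
```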
International Conference on Intelligent Robotics and Applications | 2011
Jan Frederik Steffen; Jonathan Maycock; Helge Ritter
We present a novel data glove mapping technique based on parameterisable models that handle both the cross-coupled sensors of the fingers and thumb, and the under-specified abduction sensors of the fingers. Our focus is on realistically reproducing the posture of the hand as a whole, rather than on accurate fingertip positions. The proposed method is a vision-free, object-free data glove mapping and calibration method that has been successfully used in robot manipulation tasks.
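As a very rough illustration (not the parameterisable models of the paper), a sensor-to-joint mapping with cross-coupling can be written as an affine map whose off-diagonal terms absorb sensor cross-talk; every name here is an assumed simplification.

```python
import numpy as np

def glove_to_joints(raw_sensors, coupling, offset):
    """Affine mapping from raw glove sensor readings to joint angles.
    Off-diagonal entries of `coupling` model cross-talk between
    neighbouring sensors (e.g. thumb and index); `offset` holds the
    per-joint calibration offsets."""
    return coupling @ np.asarray(raw_sensors) + offset
```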
IEEE-RAS International Conference on Humanoid Robots | 2015
Jonathan Maycock; Tobias Röhlig; Matthias Schröder; Mario Botsch; Helge Ritter
Optical motion tracking systems often require considerable manual work to generate clean, labeled trajectories. This can be a deterrent if the goal is the creation of large motion tracking datasets. Especially in the case of hand tracking, occlusions (often self-occlusion by other fingers) make the post-processing task difficult and time-intensive. We introduce a fully automatic optical motion tracking method that utilizes a model-based inverse kinematics approach. The Hungarian method is used to efficiently calculate associations between model markers and motion capture markers, and we demonstrate an elegant solution to the problem of occlusions using a posture interpolation step.
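A short sketch of the association step, assuming scipy is available: pairwise distances between predicted model markers and captured markers are fed to the Hungarian method (scipy.optimize.linear_sum_assignment), and pairs beyond a distance threshold are treated as occluded or spurious. The threshold and function name are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_markers(model_pts, capture_pts, max_dist=0.03):
    """Associate predicted model marker positions with captured markers
    via the Hungarian method on the pairwise distance matrix; pairs
    farther apart than max_dist are dropped as occluded/unmatched."""
    cost = np.linalg.norm(model_pts[:, None, :] - capture_pts[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```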