Eye movement velocity and gaze data generator for evaluation, robustness testing and assessment of eye tracking software and visualization tools
Wolfgang Fuhl and Enkelejda Kasneci

Eberhard Karls University, Sand 14, 72076 Tuebingen, Germany
{wolfgang.fuhl, enkelejda.kasneci}@uni-tuebingen.de
Abstract. Eye movements hold information about human perception, intention, and cognitive state. Various algorithms have been proposed to identify and distinguish eye movements, particularly fixations, saccades, and smooth pursuits. A major drawback of existing algorithms is that they rely on accurate and constant sampling rates, impeding straightforward adaptation to new movements such as microsaccades. We propose a novel eye movement simulator that i) probabilistically simulates saccade movements as gamma distributions considering different peak velocities and ii) models smooth pursuit onsets with the sigmoid function. Additionally, it is capable of producing velocity and two-dimensional gaze sequences for static and dynamic scenes using saliency maps or real fixation targets. Our approach is also capable of simulating any sampling rate, even with fluctuations. The simulation is evaluated against publicly available real data using the squared error. The Matlab code for the simulator can be downloaded at http://ti.uni-tuebingen.de/Projekte.1801.0.html or used in EyeTrace.
Keywords: eye movement, simulation, generation, fixation, saccade, smooth pursuit, saliency maps, gaze mapping
1 Introduction

Eye movements hold valuable information about a subject and their cognitive state [3,16] and are also important for the diagnosis of defects and diseases of the eyes (many examples can be found in [20]). Therefore, the detection and differentiation of eye movement types have to be accurate. Most algorithms for eye movement detection apply different dispersion, velocity or acceleration thresholds and validate the detected eye movements based on their duration. This approach seems to be unsatisfactory [1] at its current state. This is partially due to unstable/dynamic sampling rates of eye tracking devices, task-specific sources of noise, the interpolation method applied to the data by the eye tracker, and several more [5,8]. Depending on the task at hand, different thresholds are proposed in the literature [13]. It is especially difficult to adjust these thresholds for inconsistent sampling rates and for noise which is not annotated by the eye tracker. Some commercial eye trackers differentiate between tracking the eye and pupil and re-detecting them after a tracking loss, where the latter requires significantly more processing time and thus results in a decreased frame rate. Therefore, the identification of eye movements is still a difficult task; it complicates confidently generalizing research findings across experiments [1].

We propose an eye movement simulator to generate data similar to the data of eye trackers. This is especially useful if algorithms have to be evaluated or data is necessary to test the robustness of software working with eye tracking data. In addition, the generated data can be used to assess visualizations. The proposed simulator currently contains the following features:

– Generate velocity profiles of saccades, fixations and smooth pursuits based on scientific findings.
– Generate random sequences following predefined orders.
– Generate static and dynamic sampling rates.
– Support any sampling rate.
– Generate gaze positions for static images using saliency maps.
– Generate new eye tracking data using real data mapped to a saliency map or to real fixation targets.
– Generate gaze positions for dynamic scenes like EPIC-Kitchens [6] using saliency maps or real fixation targets.
2 Related Work

While there are well-established findings about the gaze signal itself, its synthesis is still challenging. In the Eyecatch [32] simulator, a Kalman filter is used to produce a gaze signal for saccades and smooth pursuits. While the signal itself was similar to real eye-tracking recordings, the jitter was missing. The first approach for rendering realistic and dynamic eye movements was proposed in [19], where the main focus was on saccadic eye movements. It also included smooth pursuits, binocular rotations (vergence) and the combination of eye and head rotations. The first data-driven approaches were proposed in [21] and [25]. Both simulate the head and eye movements together in order to generate eye-tracking data. The main disadvantage of [21] was that head motion seemed to trigger eye movement. In fact, the head orientation is only changed if the necessary amplitude of the eye is larger than a specific threshold [22]. Another data-driven approach was proposed in [18], where an automated framework for head motion, gaze, and eyelid simulation was developed. The framework generates data based on speech input using trained Gaussian Mixture Models. While this approach is capable of synthesizing nonlinear data, it only generates unperturbed gaze directions. The approach in [10] models eye rotations using specific eye-related quaternions for oculomotor rotations as proposed in [29]. The main disadvantage of this approach is that the synthetic eyes cannot be rotated automatically. The approach in [31] produces gaze vectors and eye images to train machine learning approaches for gaze prediction, but does not synthesize realistic eye movements.

All of the aforementioned approaches have their origin in computer graphics, with the goal of generating visually realistic head movement and gaze data. The main application of those simulators is to produce realistic interacting virtual humans using parametric models [2,24]. This leads to the disadvantage that all movements in the generated data are perfect, optimal representatives. In reality, the raw gaze data in eye movements contains noise introduced either through actual movements such as microsaccades or through inaccuracies of the used eye tracker. The first approach to simulate a realistic scan path, i.e., a sequence of fixations and saccades, on static images was proposed in [4]. They use a saliency map together with a unified Bayesian model to generate realistic random walks over a stimulus. A pure gaze data simulation approach including noise was proposed in [7]. Based on this approach, [9] further improves the noise synthesis by simulating jitter as a normal distribution.

Our approach combines the aforementioned publications by simulating velocity profiles of three eye movement types and mapping them to saliency maps. In contrast, our approach is capable of simulating noise as a uniform and normal distribution and also allows producing any sampling rate (static and dynamic). We also extended the mapping functionality to allow the usage of real fixation targets and of dynamic scenes instead of only static images.
Fig. 1. Work-flow of generating eye movement data. First, a sequence of eye movement types is generated. In the second step, a model of each eye movement type is generated (F: Fixation, S: Saccade, SM: Smooth pursuit). This model allows for an almost infinite sampling rate, which is in the next stage interpolated to a target sampling rate (Red: Fixation, Green: Saccade, Blue: Smooth pursuit). Finally, noise is added on top of the signal (gray). The last stage is the mapping of the generated sequence to target locations.
3 Method

The entire work-flow of the simulator is shown in Figure 1. Generating an eye movement velocity profile is done in four steps. The first step chooses a sequence of eye movement types (fixation, saccade, smooth pursuit) without any time or velocity constraints. Afterward, each movement type in this sequence is assigned a velocity profile generated by previously set parameters. The mathematical model behind these profiles allows sampling at an extremely high, almost arbitrary rate. The target sampling rate is obtained by interpolating the computed frequency, which also allows for dynamically adjusting the target sampling rate. In the next step, noise is added which represents measurement errors or blinks. The generated velocity profile is then mapped to two-dimensional locations on a stimulus, which are created using saliency maps ([12,15,14]) or real fixation targets. This also allows remapping real eye tracking data or using a dynamic stimulus, taking into account the duration of the individual eye movement types. Each step of this eye movement simulator is described in the following subsections in more detail. The simulator also includes a random walk generator to model fixation direction [11]; saccade and smooth pursuit directions are generated randomly (but consistently within a movement) since these are stimulus- and task-dependent.
3.1 Eye movement type sequence

Generating a sequence of eye movement types can be done either by sampling from a uniform distribution, setting it manually, or by following construction constraints. In the case of uniformly distributed eye movements, the generator script randomly selects between the three types of eye movements. If the amount of each type is specified a priori, the probability is automatically adjusted. This means that after each insertion the probabilities are recomputed based on the remaining quantity of each type, favoring higher quantities. This process can also be constrained, e.g., by forcing the algorithm to insert a saccade after each fixation or before a smooth pursuit.
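For illustration, this quantity-adjusted sampling can be written as a short MATLAB sketch; the quotas and variable names below are our own and not the simulator's actual interface:

```matlab
% Sketch: draw an eye movement type sequence where the selection
% probability of each type is re-weighted by its remaining quota.
% Type codes: 1 = fixation, 2 = saccade, 3 = smooth pursuit.
remaining = [10, 8, 4];              % desired amount per type (illustrative)
sequence  = zeros(1, sum(remaining));
for i = 1:numel(sequence)
    p = remaining / sum(remaining);  % probabilities favor larger quotas
    t = find(rand() <= cumsum(p), 1);% inverse transform sampling
    sequence(i) = t;
    remaining(t) = remaining(t) - 1; % update quota after insertion
end
disp(sequence)
```

Construction constraints such as "a saccade after each fixation" would simply restrict the admissible types before each draw.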
3.2 Fixations

Fixations are generated based on two probability distributions which can be specified and parametrized. The first distribution determines the duration of a fixation, the second its consistency. For both duration and consistency, the minimum and maximum can be set. As distributions, the simulator provides Normal and Uniform random number generation. For the Normal distribution, the standard deviation can be specified. Consistency describes the fluctuations in the velocity profile and is used as such in the entire document.

In Figure 2, two artificially generated fixations are shown. The consistency was set to one degree per second and the standard deviation for the Normal distribution to two (Figure 2 (a)). As can be seen in the figure, the Uniform distribution looks more similar to real data, although we set the consistency very high at one degree per second.
Fig. 2. Generated fixation based on a Normal (a) and Uniform (b) distribution.
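A minimal MATLAB sketch of such a fixation profile, assuming a small positive baseline velocity (our own simplification) and the two selectable jitter distributions:

```matlab
% Sketch: fixation velocity profile (deg/s) as a low baseline with
% fluctuations ("consistency") from a Normal or Uniform distribution.
fs          = 1000;      % internal sampling rate in Hz
duration    = 0.3;       % fixation duration in seconds
consistency = 1;         % fluctuation amplitude in deg/s
useNormal   = true;      % switch between the two generators
n = round(duration * fs);
if useNormal
    jitter = consistency * randn(1, n);         % std. dev. = consistency
else
    jitter = consistency * (2*rand(1, n) - 1);  % uniform in [-c, c]
end
velocity = max(0, 1 + jitter);                  % baseline of 1 deg/s, clamped
plot((0:n-1)/fs, velocity); xlabel('time (s)'); ylabel('velocity (deg/s)');
```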
3.3 Saccades

The most complex part of the eye movement generator is the saccade model. For the length, we follow the same approach as for the fixations, in which a minimum and maximum length have to be set. The selectable distributions are Normal and Uniform. The sampled length also influences the maximum speed of the saccade: the two random numbers (both in the range between zero and one) are multiplied, so shorter saccades are limited to lower maximal velocities. To generate the velocity profile, the minimum, maximum and distribution type have to be set.

The most characteristic property of a saccade is its velocity profile. In our simulator, this is generated as a Gamma distribution, for which the minimum and maximum skewness have to be specified. In [30] it was found that the Gamma function can be considered suitable to approximate saccade profiles (yet not perfectly). To achieve more realistic data, a consistency minimum, maximum and distribution can be specified. This generates the jitter along the velocity profile.
Fig. 3. Generated saccades with jitter (b,d) and without (a,c). For (a) and (b), the distribution was skewed to the left. In (c) and (d), the Gamma distribution was only slightly skewed.
Figure 3 shows some generated saccades of fixed length. We simulated two strongly and two slightly left-skewed saccades. The maximum velocity was selected from a range between 300 and 500 degrees per second. As can be seen from the figure, the profile contains the on- and offset of a saccade. The profile itself is smooth and follows the Gamma distribution. Post-saccadic movement is as of now missing in the simulator. In Figure 3(b) and (d), a small amount of jitter was added to simulate measurement inaccuracy. Such inaccuracy usually occurs through the approximation on image pixels or ellipse fit inaccuracy in pupil detection.
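A Gamma-shaped saccade profile as described above can be sketched as follows; the shape parameter stands in for the configurable skewness, and all names and constants are our own assumptions:

```matlab
% Sketch: Gamma-shaped saccade velocity profile scaled to a sampled
% peak velocity, with measurement-like jitter on top.
fs       = 1000;                 % internal sampling rate in Hz
duration = 0.05;                 % saccade duration in seconds
k = 3; theta = 1;                % Gamma shape (controls skewness) and scale
vPeak    = 300 + 200*rand();     % peak velocity in [300, 500] deg/s
n = round(duration * fs);
x = linspace(0, 10, n);          % support of the Gamma pdf
profile = x.^(k-1) .* exp(-x/theta) / (gamma(k) * theta^k);
profile = profile / max(profile) * vPeak;   % normalize to the peak velocity
velocity = max(0, profile + 2*randn(1, n)); % add jitter, clamp at zero
plot((0:n-1)/fs, velocity); xlabel('time (s)'); ylabel('velocity (deg/s)');
```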
3.4 Smooth pursuits

For generating smooth pursuits, we also simulate the onset following the findings in [23]. The authors did not provide a final function for the description of the velocity profile but visualized and described it precisely. The shape of the onset of a smooth pursuit follows a nonlinearly growing function similar to the sigmoid function. While this equation is not scientifically proven, our framework allows simply replacing it once a better model is available. The most complex part of the pursuit model is the onset, followed by a regular movement.

The parameters that can be specified are the minimum and maximum length together with their distribution type. For the velocity and the length of the onset, the same parameters can be adjusted. To include the measuring error, the consistency parameters are also configurable. For the pursuit itself, we included linearly growing, decreasing and constant profiles. In the case of the growing profile, again the minimum, maximum and consistency function can be specified.
Fig. 4. Generated smooth pursuits with jitter (b,d,f) and without (a,c,e). For (a) and (b), the pursuit movement was constant. In (c,d) and (e,f), it was linearly increasing and decreasing, respectively.

Figure 4 shows simulated smooth pursuits. For the visualization of the linearly decreasing and increasing functions, extreme values were used. The first column shows a smooth pursuit for a constantly moving object, which is often observed in laboratory experiments. The increasing and decreasing profiles are for objects which move further away from or come closer to the subject with a constant speed. Other profiles may occur in real settings too, where the object has a slightly varying speed, but these are future extensions of the generator and not part of this paper.
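Assuming a plain logistic function for the sigmoid-like onset shape, a pursuit profile can be sketched like this; the constants and names are illustrative:

```matlab
% Sketch: smooth pursuit velocity profile with a sigmoid onset ramp
% followed by a constant or linearly changing pursuit phase.
fs       = 1000;   % internal sampling rate in Hz
tOnset   = 0.2;    % onset duration in seconds
tPursuit = 0.8;    % pursuit duration in seconds
vTarget  = 20;     % steady pursuit velocity in deg/s
slope    = 0;      % deg/s^2; > 0 increasing, < 0 decreasing profile
x = linspace(-6, 6, round(tOnset * fs));
onset = vTarget ./ (1 + exp(-x));        % sigmoid-shaped onset
t = (0:round(tPursuit * fs) - 1) / fs;
pursuit = vTarget + slope * t;           % linear pursuit phase
velocity = [onset, pursuit];
velocity = velocity + 0.5 * randn(size(velocity));  % consistency jitter
plot((0:numel(velocity)-1)/fs, velocity);
xlabel('time (s)'); ylabel('velocity (deg/s)');
```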
3.5 Sampling

After generating and linking the eye movements, they have to be interpolated to a sampling rate. This is necessary to simulate different recording frequencies. Here it is important to mention that not all modern eye trackers record at a constant frequency. On the one hand, image acquisition rates can vary depending on illumination changes that affect the aperture time of the camera, and timestamps generated by the eye tracker can vary in accuracy. On the other hand, image processing times, e.g. for eye and pupil detection, are not necessarily constant and might change depending on how easily the pupil can be identified. For example, detection of the pupil is usually more time-consuming than keeping track of a previously detected pupil. Some systems, especially when running on mobile devices, may run into a state where frames are dropped in order to maintain real-time performance. We found systems where the timestamps are generated by the CPU time (which may be inaccurate for fast sampling rates) and even timestamps that are generated after image processing. Therefore, our simulator is capable of simulating varying sampling rates. The parameters for this step are the minimum and maximum sampling rate and also the consistency function. The interpolation itself computes the mean of all values from the last sampling position to the new sampling position.
Fig. 5. Generated velocity profile of an eye movement sequence (a). In (b), the data is sampled at 60 Hz without variations. (c) and (d) vary between 50 and 70 Hz with the Normal and the Uniform distribution, respectively.
In Figure 5(a), a generated velocity profile is shown. The initial sampling frequency was set to 1000 Hz, but any other sampling rate is possible. For (b), a constant sampling frequency of 60 Hz was used. In (c), the sampling frequency varies between 50 and 70 Hz (with a mean of 60 Hz), wherein the Normal distribution was used as the random number generator. It differs significantly from the constant sampling rate in (b) and also has a different length. For (d), the sampling frequency also varied between 50 and 70 Hz, with the difference that the Uniform distribution was used as the random number generator. The length is therefore similar to the constant sampling rate, but it still differs, especially for the saccadic peaks.
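The averaging interpolation can be sketched as follows, with the target rate redrawn for every output sample to mimic a fluctuating eye tracker; the loop structure and names are our own illustration:

```matlab
% Sketch: resample a high-rate profile to a fluctuating target rate by
% averaging all source samples between consecutive sampling positions.
fsSrc = 1000;                    % source rate of the generated model (Hz)
src   = rand(1, 2000);           % placeholder high-rate velocity profile
fsMin = 50; fsMax = 70;          % target sampling rate range (Hz)
out = []; tOut = []; pos = 1; t = 0;
while pos <= numel(src)
    fsCur = fsMin + (fsMax - fsMin) * rand();  % current (varying) rate
    step  = round(fsSrc / fsCur);              % source samples per output
    out(end+1)  = mean(src(pos : min(pos + step - 1, end))); %#ok<SAGROW>
    t = t + 1 / fsCur;
    tOut(end+1) = t;                           %#ok<SAGROW>
    pos = pos + step;
end
plot(tOut, out); xlabel('time (s)'); ylabel('velocity (deg/s)');
```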
3.6 Noise

For generating noise, two distributions are used: one for the locations where to place the noise in the data and a second for the velocity change to apply.
Therefore, the user has to specify the types of both distributions and the minimum and maximum velocity of the noise. The amount of noise is specified as a percentage of the samples that should be influenced.
Fig. 6. Generated velocity profile of an eye movement sequence (a). In (b), noise is added based on a Normal distribution and in (c) a Uniform distribution was used.
Figure 6 shows two types of noise added to the velocity profile shown in (a). The amount of noise added was 10%. For the Normally distributed noise in (b), it can be seen that the peaks are mostly high. In comparison, the Uniformly distributed noise in (c) produces more peaks of different heights.
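A sketch of this two-distribution noise step, with assumed parameter names and an ad-hoc mapping of the Normal draw onto the configured velocity range:

```matlab
% Sketch: perturb a fraction of the samples; one distribution picks the
% positions, a second one the velocity change that is added there.
velocity = 10 * rand(1, 1000);   % placeholder velocity profile (deg/s)
fraction = 0.10;                 % 10% of the samples get noise
vMin = 50; vMax = 300;           % noise velocity range (deg/s)
nNoise = round(fraction * numel(velocity));
idx = randi(numel(velocity), 1, nNoise);              % uniform positions
mag = vMin + (vMax - vMin) * abs(randn(1, nNoise))/3; % Normal magnitudes
mag = min(mag, vMax);                                 % clamp to maximum
velocity(idx) = velocity(idx) + mag;                  % add noise peaks
plot(velocity); xlabel('sample'); ylabel('velocity (deg/s)');
```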
Fig. 7. Scenarios of mapping eye movement data. First, a video sequence is converted to saliency maps and afterward gaze data is mapped to them based on the frame rate. The central part shows the mapping of generated or real data to a new stimulus image. At the bottom, real data is mapped to the same stimulus based on fixation locations and saccade durations. The upper series of images is taken from EPIC-Kitchens [6].
3.7 Mapping

The mapping function of the simulator is used to produce spatial data out of the velocity profiles (Figure 7). This functionality can be used for simulated and real data (fixations, saccades, smooth pursuits). Therefore, possible fixation targets have to be identified, for which our simulator includes three saliency maps ([12,15,14]). As locations, the local maxima of these saliency maps are used. In addition, a small random shift of each local maximum is also included as a possible target, to simulate close consecutive fixations. Afterward, a type of eye movement is selected and randomly generated by a predefined parameter set including a duration range. For saccades and smooth pursuits, this duration is interpolated to the distance between the last and the new fixation target, taking into account the speed of the individual sample points. In addition, a maximal deviation range can be defined based on which the gaze points deviate from the straight line between both positions. For fixations, the scattering is generated based on the deviation parameter (inaccuracy of the measurement simulation). For smooth pursuits, both approaches are used to map the velocity profile between two locations. Since it is possible to generate a velocity profile out of real data, the aforementioned approaches can also be used to map real fixations, saccades, and smooth pursuits to new stimulus images. For the generation of new data out of real data for the same stimulus image, we propose to randomly select an eye movement type out of the real data and use the centers of all fixations as possible targets. An example can be seen at the bottom of Figure 7.

For dynamic scenes, the generated data can be mapped based on the same approach (local maxima of saliency maps). The only difference is that the local maxima are time-dependent. This means that for a saccade the two locations have to be selected out of two different sets of local maxima, which are computed based on the timestamps of frames in the video (Figure 7, top).
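The core of mapping a saccade between two targets, distributing the traveled distance proportionally to the velocity profile and adding a small deviation from the straight line, can be sketched as follows (targets, profile, and deviation magnitude are placeholders):

```matlab
% Sketch: map a saccade velocity profile onto the line between two
% fixation targets, e.g., local maxima of a saliency map.
p1 = [120, 200]; p2 = [480, 310];        % start / end targets (pixels)
velocity = sin(linspace(0, pi, 50)).^2;  % placeholder velocity profile
w = cumsum(velocity) / sum(velocity);    % fraction of distance covered
gaze = p1 + w' * (p2 - p1);              % n-by-2 gaze positions on the line
gaze = gaze + 1.5 * randn(size(gaze));   % deviation from the straight line
plot(gaze(:,1), gaze(:,2), '.-'); axis ij; % image coordinates (y down)
```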
Fig. 8. Squared velocity error for the simulation per data set.
4 Evaluation

Figure 8 shows the per-sample-point squared error as whisker plots of our simulator in comparison to the publicly available datasets [17,27,28,26]. The error was computed based on the squared difference between each sample. Therefore, we simulated each fixation, saccade, and smooth pursuit ten times with the same length as in the available data sets. For a fixation, the simulator got the information of the mean velocity and the standard deviation to generate a profile.
The information for a saccade was the peak velocity and the position of this peak. For smooth pursuits, the simulator got the information of the mean velocity and the standard deviation.

As can be seen in Figure 8(b), the error for saccades was the largest. This is due to the noisy signal, which stems from the inter-sample velocity computation. Figure 9 shows some saccades which produced high squared errors.
Fig. 9. Saccades with a high squared error. Red is the simulation and blue is the real data.

The red line corresponds to the simulation result, whereas the blue line corresponds to the real data. As can be seen, the course of the velocity profile is well simulated, which is well in line with previous findings in [30]. The high errors originate mainly from measurement inaccuracies in the real data. This also highlights the difficulty of detecting eye movements in such a signal. For the data set from [26] (I-BDT), the error for saccades was lowest. This is due to the low sampling rate of the used eye tracker (30 Hz), for which large fluctuations do not occur. This is similar to smoothing or using multiple samples for the velocity computation. In contrast, the smooth pursuit error was the largest in the I-BDT data set. This is because at such low sampling rates the onset of a smooth pursuit is hardly represented. Our simulator is capable of simulating this (Section 3.5), but it was not used for the evaluation. We only used the generators to simulate the eye movements.
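The metric itself reduces to a per-sample squared difference between profiles of equal length; a sketch with placeholder profiles (the actual evaluation uses recorded data and ten repetitions per event):

```matlab
% Sketch: per-sample squared error between a recorded and a simulated
% velocity profile of the same length.
recorded  = 300 * sin(linspace(0, pi, 40)).^2;  % placeholder real saccade
simulated = recorded + 10 * randn(1, 40);       % placeholder simulation
sqErr = (recorded - simulated).^2;              % per-sample squared error
fprintf('mean squared error: %.2f (deg/s)^2\n', mean(sqErr));
boxplot(sqErr');  % whisker plot as in Fig. 8 (needs Statistics Toolbox)
```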
5 Conclusion

We proposed a novel eye movement simulator which is capable of creating mappings to static images and dynamic scenes based on saliency maps or real fixation targets. Optionally, the framework is also capable of remapping real eye tracking data onto new stimuli or generating a new scan path based on real data for the same stimuli. In addition, it can generate data for any static and dynamic sampling rate. The currently included eye movement types are fixations, saccades and smooth pursuits, which can be parameterized. Variations and noise can be generated using different distributions for noise, sampling shift, eye tracker accuracy, etc. Further research will be the extension of the simulator to also generate post-saccadic, optokinetic and vestibulo-ocular movements. In addition, smooth pursuits have to follow an object, for which we want to include key point registration and detection to compute possible locations for this type of eye movement in videos.
References
1. Andersson, R., Larsson, L., Holmqvist, K., Stridh, M., Nyström, M.: One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms. Behavior Research Methods (2), 616–637 (2017)
2. Andrist, S., Pejsa, T., Mutlu, B., Gleicher, M.: Designing effective gaze mechanisms for virtual agents. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 705–714. ACM (2012)
3. Braunagel, C., Geisler, D., Rosenstiel, W., Kasneci, E.: Online recognition of driver-activity based on visual scanpath classification. IEEE Intelligent Transportation Systems Magazine (4), 23–36 (2017)
4. Campbell, D.J., Chang, J., Chawarska, K., Shic, F.: Saliency-based Bayesian modeling of dynamic viewing of static scenes. In: Proceedings of the Symposium on Eye Tracking Research and Applications. pp. 51–58. ACM (2014)
5. Cornelissen, F.W., Peters, E.M., Palmer, J.: The EyeLink Toolbox: eye tracking with MATLAB and the Psychophysics Toolbox. Behavior Research Methods, Instruments, & Computers (4), 613–617 (2002)
6. Damen, D., Doughty, H., Farinella, G.M., Fidler, S., Furnari, A., Kazakos, E., Moltisanti, D., Munro, J., Perrett, T., Price, W., Wray, M.: Scaling egocentric vision: The EPIC-Kitchens dataset. arXiv preprint arXiv:1804.02748 (2018)
7. Duchowski, A., Jörg, S., Lawson, A., Bolte, T., Świrski, L., Krejtz, K.: Eye movement synthesis with 1/f pink noise. In: Proceedings of the 8th ACM SIGGRAPH Conference on Motion in Games. pp. 47–56. ACM (2015)
8. Duchowski, A.T.: A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers (4), 455–470 (2002)
9. Duchowski, A.T., Jörg, S., Allen, T.N., Giannopoulos, I., Krejtz, K.: Eye movement synthesis. In: Proceedings of the Symposium on Eye Tracking Research and Applications. pp. 147–154. ACM (2016)
10. Duchowski, A., Jörg, S.: Modeling physiologically plausible eye rotations. Proceedings of Computer Graphics International (2015)
11. Engbert, R., Mergenthaler, K., Sinn, P., Pikovsky, A.: An integrated model of fixational eye movements and microsaccades. Proceedings of the National Academy of Sciences (39), E765–E770 (2011)
12. Harel, J., Koch, C., Perona, P.: Graph-based visual saliency. In: Advances in Neural Information Processing Systems. pp. 545–552 (2007)
13. Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., Van de Weijer, J.: Eye Tracking: A Comprehensive Guide to Methods and Measures. OUP Oxford (2011)
14. Hou, X., Harel, J., Koch, C.: Image signature: Highlighting sparse salient regions. IEEE Transactions on Pattern Analysis and Machine Intelligence (1), 194–201 (2012)
15. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (11), 1254–1259 (1998)
16. Kübler, T.C., Rothe, C., Schiefer, U., Rosenstiel, W., Kasneci, E.: SubsMatch 2.0: Scanpath comparison and classification based on subsequence frequencies. Behavior Research Methods (3), 1048–1064 (2017)
17. Larsson, L., Nyström, M., Stridh, M.: Detection of saccades and postsaccadic oscillations in the presence of smooth pursuit. IEEE Transactions on Biomedical Engineering (9), 2484–2493 (2013)
18. Le, B.H., Ma, X., Deng, Z.: Live speech driven head-and-eye motion generators. IEEE Transactions on Visualization and Computer Graphics (11), 1902–1914 (2012)
19. Lee, S.P., Badler, J.B., Badler, N.I.: Eyes alive. In: ACM Transactions on Graphics (TOG). vol. 21, pp. 637–644. ACM (2002)
20. Leigh, R.J., Zee, D.S.: The Neurology of Eye Movements, vol. 90. Oxford University Press, USA (2015)
21. Ma, X., Deng, Z.: Natural eye motion synthesis by modeling gaze-head coupling. In: IEEE Virtual Reality Conference. pp. 143–150. IEEE (2009)
22. Murphy, H., Duchowski, A.T.: Perceptual gaze extent & level of detail in VR: looking outside the box. In: ACM SIGGRAPH Conference Abstracts and Applications. pp. 228–228. ACM (2002)
23. Ogawa, T., Fujita, M.: Velocity profile of smooth pursuit eye movements in humans: pursuit velocity increase linked with the initial saccade occurrence. Neuroscience Research (3), 201–209 (1998)
24. Pejsa, T., Mutlu, B., Gleicher, M.: Stylized and performative gaze for character animation. In: Computer Graphics Forum. vol. 32, pp. 143–152. Wiley Online Library (2013)
25. Peters, C., Qureshi, A.: A head movement propensity model for animating gaze shifts and blinks of virtual characters. Computers and Graphics (6), 677–687 (2010)
26. Santini, T., Fuhl, W., Kübler, T., Kasneci, E.: Bayesian identification of fixations, saccades, and smooth pursuits. In: Proceedings of the Symposium on Eye Tracking Research and Applications. pp. 163–170. ACM (2016)
27. Startsev, M., Agtzidis, I., Dorr, M.: Smooth pursuit. http://michaeldorr.de/smoothpursuit/ (2016)
28. Startsev, M., Agtzidis, I., Dorr, M.: Manual & automatic detection of smooth pursuit in dynamic natural scenes. In: Proceedings of the European Conference on Eye Movements (2017)
29. Tweed, D., Cadera, W., Vilis, T.: Computing three-dimensional eye position quaternions and eye velocity from search coil signals. Vision Research (1), 97–110 (1990)
30. Van Opstal, A., Van Gisbergen, J.: Skewness of saccadic velocity profiles: a unifying parameter for normal and slow saccades. Vision Research (5), 731–745 (1987)
31. Wood, E., Baltrusaitis, T., Zhang, X., Sugano, Y., Robinson, P., Bulling, A.: Rendering of eyes for eye-shape registration and gaze estimation. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 3756–3764 (2015)
32. Yeo, S.H., Lesmana, M., Neog, D.R., Pai, D.K.: Eyecatch: simulating visuomotor coordination for object interception. ACM Transactions on Graphics (TOG) 31 (2012)