Improving Aviation Safety using Synthetic Vision System integrated with Eye-tracking Devices
Mingliang Xu, Yibo Guo*, Bailin Yang, Wei Chen, Pei Lv, Liwei Fan, Bin Zhou
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. XX, NO. X, 2018
Abstract — By collecting data on the eyeball movements of pilots, it is possible to monitor a pilot's operation during flight in order to detect potential accidents. In this paper, we design a novel SVS system that is integrated with an eye tracking device and achieves the following functions: 1) a novel method that learns from the eyeball movements of pilots and preloads or renders the terrain data at various resolutions, in order to improve the quality of the terrain display by comprehending the pilot's regions of interest; 2) a warning mechanism that detects risky operations by analyzing the aviation information from the SVS and the eyeball movements from the eye tracking device, in order to prevent maloperations or human-factor accidents. The user study and experiments show that the proposed SVS-Eyetracking system works efficiently and is capable of avoiding potential risks caused by fatigue in the flight simulation.
Index Terms — Synthetic Vision Systems, eye-tracking, fatigue detection, aviation risk warning.
I. INTRODUCTION
Synthetic Vision Systems (SVS) are designed to provide 3D terrain images of the aircraft's future route, in order to enhance pilots' awareness of potential Controlled Flight into Terrain (CFIT) accidents [5], [6], [7]. The images are composed of a virtual environment of the outside terrain and various symbols representing relevant information that may affect the aircraft on the flight path, such as a real-time navigation route, buildings and water that are beyond sight [7], [12], and a depiction of traffic in the airspace ahead of the pilot [13]. Through the virtual tunnel shown on the SVS displays, pilots may detect hidden obstacles before it is too late to react, or become aware of potentially risky operations that may endanger the flight.

In recent times, with the development of eye tracking technology, which has been widely used in flight training simulations, the original SVS techniques have also benefited from the introduction of eye tracking devices into the real-time aviation environment. In [15] and [1], eye-tracking data were collected from pilots in order to investigate their concentration during the flight. On the other hand, by collecting data on the eyeball movements of pilots, it is possible to monitor a pilot's operation during flight in order to detect potential accidents. In this paper, we design a novel SVS system that is integrated with an eye tracking device and achieves the following functions:

• A novel method that learns from the eyeball movements of pilots and preloads or renders the terrain data at various resolutions, in order to improve the quality of the terrain display by comprehending the pilot's regions of interest.

• A warning mechanism that detects risky operations by analyzing the aviation information from the SVS and the eyeball movements from the eye tracking device, in order to prevent maloperations or human-factor accidents.

The novel SVS system integrates an eye tracking device into an ordinary SVS system. A novel SVS-Eyetracking (S-E) architecture is proposed in this paper. Based on the proposed architecture, a novel algorithm is illustrated that preloads and renders terrain data at multiple resolutions according to the eyeball movements of pilots. The proposed algorithm transmits the minimum terrain data during the data preloading period and renders the polygonal tessellation considering the real-time eyeball movements. According to the user-study questionnaires, the proposed algorithm is capable of improving the driving experience of pilots.

The warning mechanism for fatigue/maloperation detection is another major feature of the proposed system. Fatigue detection via eye tracking in vehicle driving has been widely investigated by the smart transportation and human-machine interface communities. However, fatigue detection on an aircraft, especially on airline flights, differs from detection on ordinary vehicles. Aviating operations based on the flight status are much more complicated than driving a vehicle on the road, and maloperations are much more difficult to recognize. In this case, we propose a novel warning mechanism based on the proposed SVS-Eyetracking architecture that is able to detect potentially risky operations given the current flight status in a non-intrusive way with good accuracy. The novel mechanism may also perceive pilots' fatigue and prevent human-factor accidents by warning them.

The paper is organized as follows: related work on SVS systems and eye tracking devices is reviewed in Section 2; the proposed S-E architecture is illustrated in Section 3; the optimal data preload algorithm is proposed in Section 4; the new warning mechanism is illustrated in Section 5; the experiments and user study are presented in Section 6; and the conclusion is given in Section 7.

(Mingliang Xu, Yibo Guo, Pei Lv, and Bin Zhou are with the School of Information Engineering, Zhengzhou University, China. E-mail: [email protected]. Bailin Yang is with Zhejiang Gongshang University, Hangzhou, China. Wei Chen is with Zhejiang University, Hangzhou, China. Liwei Fan is with Luoyang Electrooptical Equipment Research Institute, Luoyang, China.)
II. RELATED WORK
A. Terrain Representation
The measurement and representation of terrain textures has been widely studied during the last decades [7], [18], [27], [30], [31], [32], [8], [9], [14], [23], [4], [28], [29]. The terrain generated on an SVS display device is a continuous mesh of polygons stored in a binary tree. The general solution for terrain LOD is to use multiple resolutions to control the data preloading and rendering process. There are many related surveys and references in [7], [18], [27], [30], [31], [32]. The terrain data are updated in real time, and the amount of preloaded data is incrementally changed according to the height and velocity of the aircraft. The first applied method for dealing with massive terrain data was the restricted quadtree triangulation (RQT) algorithm. In [13], a method of triangle-stripping cost models is presented. However, the earlier terrain triangulation techniques, which depend on greedy algorithms, do not support real-time or view-dependent scenarios [15], [21]. In [12], the data meshes are subdivided, and the problem of irregular meshes is converted into smoother surfaces. Based on these former works, a method of wavelet analysis for terrain LOD is proposed in [24]. This method supports good coherence for a movable and overlooking point of view, but it does not guarantee the error bounds on fine-grained vertex deletion.

Recent research on terrain visualization has focused on dealing with hierarchical triangulated meshes. In [18], the method of triangle-bintree meshes is chosen. The mesh of their method is very similar to the mesh in our method. The simple bintree structure does not recognize or support the split and merge operations. Therefore, special care must be taken to handle the continuity and coherence of the maintained meshes.
Meanwhile, with the requirement of better display performance, the frame rate has increased tremendously, which greatly increases the amount of preloaded data meshes. In [22], [14], a hierarchical triangulated irregular network (TIN) data structure with near/far annotations for vertex morphing is described. These methods also include a queue-driven top-down refinement procedure for building the triangle mesh for a frame. However, the first version of TIN does not consider automatic morphing procedures, and the memory requirements are still high for each multi-resolution mesh. A better version of a quadtree structure is proposed in [12], which preprocesses the heights of the meshes in unified grids. The preprocessing phase computes the vertices at each quadtree level, and the vertices are fitted to a least-squares approximation of the level below. The method also applies a priority queue in order to refine the quadtree from the top levels to the root levels. The disadvantage of the proposed method is that it does not consider frame-to-frame coherence, and only one type of error metric is applied in the structure.

The recent development of fine-grained LOD representation techniques can handle ordinary TIN data meshes [17], [2], [19], [20]. The new methods are able to preprocess the progressive mesh representation based on view-dependent refinement, and the algorithms allow error metrics designed for a flexible point of view. Moreover, detail reduction based on nested Gauss-map normal bounds is shown in their works. In [24], thin triangles are removed so that the error rate can be significantly reduced. New metrics for avoiding edge-collapse operations are proposed in [21]; however, the heuristic estimation cannot justify the geometric distortions well. In [25], [27], a novel feedback technique is proposed in order to process the rough rate regulations. However, frame-to-frame consistency is not fully considered, and the time complexity and cost still depend on the size of the meshes.
B. Fatigue Detection
Fatigue detection via tracking the driver's eyes has been proved to be an efficient method [26]. Facial expressions such as eyelid motions, yawning, staring, or gaze are visual cues of fatigue. Eye tracking devices are installed in order to accumulate the eyeball movement information of the pilots. According to [9], [23], most of the fatigue-related information can be obtained from the driver's eyeball movements. Plenty of fatigue metrics related to eyeball movement information have already been developed. Many commercial fatigue detection systems have been developed for vehicles, such as AntiSleep developed by Smart Eye AB and Driver State Sensor (DSS) developed by Seeing Machines [23]. By analyzing the timely records of the eyeball movements over a short time period, as well as the driving information collected from the SVS system, the fatigue status of the pilots can be easily determined.
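As one concrete example of such an eyeball-movement metric, the sketch below computes PERCLOS, a widely used drowsiness measure: the fraction of time within a window during which the eyelids are mostly closed. The threshold values are illustrative, and the paper does not state that this exact metric is the one used.

```python
# Illustrative sketch (not necessarily the metric used in this paper):
# PERCLOS is the fraction of frames in a window whose eyelid "openness"
# (assumed to be reported by the eye tracker as a value in [0, 1]) is
# below a closed-eye threshold.

def perclos(openness_samples, closed_threshold=0.2):
    """Fraction of frames whose eyelid openness is below the threshold."""
    if not openness_samples:
        raise ValueError("need at least one sample")
    closed = sum(1 for o in openness_samples if o < closed_threshold)
    return closed / len(openness_samples)

# In the driving literature, a PERCLOS above roughly 0.15 over a one-minute
# window is often treated as a sign of drowsiness.
samples = [1.0, 0.9, 0.1, 0.05, 0.8, 0.1, 0.9, 0.95, 0.1, 0.85]
print(perclos(samples))  # 0.4
```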
III. SYSTEM OVERVIEW
The integrated SVS-Eyetracking system architecture contains two modules: the SVS module, including the aviation status analyzer, the terrain data server, the display devices, etc.; and the eye tracking module, including the eyeball movement tracking device, the human status analyzer, and the warning device. The architecture of the system is shown in Fig. 1.

The SVS module of the system contains the components of an ordinary SVS system. According to the NASA SVS concept (2009), ordinary SVS systems are composed of the sensors and database servers, the embedded computation server, and the display devices. The sensors and database servers include the weather radar, the radar altimeter, the forward-looking infrared sensor, and on-board synthetic vision databases. The embedded computation server analyzes the status of the aircraft in real time. It is also responsible for image object detection and fusion, image enhancement, and terrain data rendering. The display devices are head-mounted or screen-based. In our proposed architecture, several new components are added to the embedded computation servers and the terrain data servers. New embedded hardware chips and software interfaces are installed in the ordinary SVS system in order to communicate with the eye tracking module.

The eye tracking device records three kinds of eyeball movements of the pilots: fixations, saccades, and pursuit movements. It provides the trail of the gazing spots on the terrain image to the SVS computation server, and by analyzing
it with the flight status information from the SVS module, predicts the high-resolution area of the terrain on the screen. On the other hand, by comparing the eyeball movements of previous records with the present ones, the warning device may find out whether pilots are fatigued, and remind them when the SVS module indicates the flight is in an abnormal status.

Fig. 1. Synthetic Vision Systems integrated with an eye tracking device. It preserves the original SVS functions and supports customized multi-resolution rendering based on eye tracking, as well as a fatigue detection mechanism.
Fig. 2. The SVS-Eyetracking Architecture
IV. DATA PRELOADING AND RENDERING ALGORITHMS
In this section, we present a data preload algorithm designed to preload and render the minimum terrain data from the terrain data server to the computation server, according to the eye-tracking information of the pilot. The procedure of the whole system is shown in Figure 3.
Fig. 3. Procedure of the SVS and Eyetracking modules
A. Terrain Data Structure
We propose a tree structure for the terrain data in order to integrate various types of geometry meshes into one hierarchical model. The improved terrain data structure is capable of emphasizing the pilot's area of interest.

First, let P = {p_1, p_2, ..., p_m} be a set of data points within the xy-plane. Suppose that for each data set P, the bounding box of P is the area surrounded by the extremal points a_min and a_max. The domain of P is defined as D(P) = {(x, y) | x_{a_min} ≤ x ≤ x_{a_max}, y_{a_min} ≤ y ≤ y_{a_max}}.

Let Q := {q_{u,v} | 1 ≤ u ≤ m; 1 ≤ v ≤ n} be a texture mesh which consists of m × n terrain patches q_{u,v}. The extremal points of the axis-parallel bounding box are denoted by b_min and b_max. The domain D(Q) of the texture Q is defined by D(Q) := {(x, y) | x_{b_min} ≤ x ≤ x_{b_max}; y_{b_min} ≤ y ≤ y_{b_max}}.
We define a geometry patch in a tree structure as T_{s,n}(P) for a terrain data set P, where there are at most s points of terrain data within the boundary and n child nodes in the tree structure. Each child node N represents a rectangular region D_N(P_N) ⊂ D(P); the bounding box of P_N is the domain D_N. The way of calculating the exact number of points for each node depends on the data preload strategy.

We construct the child nodes as follows. Let D_N be the terrain data of a node N whose domain satisfies D_N ⊂ D(T). The geometric approximation error e_T(N) is defined as the maximal vertical distance between the terrains D and D_N, i.e. [5],

e_T(N) := max_{p ∈ D_N} |h_D(p) − h_{D_N}(p)|.

If the error between T and T_N exceeds a certain threshold α ≥ 0, i.e., e_T(N) > α, then the domain of N is subdivided into a set of at most d rectangular subdomains such that D_N = ∪_i D_{N_i} with D_{N_i} ∩ D_{N_j} = ∅, i ≠ j, 1 ≤ i, j ≤ d. For each subdomain D_{N_i}, a node N_i is constructed which approximates the terrain in that subdomain and is added as a child node to the parent node N. The domain of the root node of the tree is D(T), covering the whole domain of the terrain T.
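As an illustration of this error-driven subdivision, the following Python sketch splits a square height grid into quadrants (i.e., d = 4) until the approximation error of each region falls below the threshold α. A flat mean-height approximation stands in for the coarser-level terrain T_N, the grid is assumed to have a power-of-two side length, and all names are illustrative rather than taken from the paper's implementation.

```python
# Sketch of the error-driven subdivision rule, assuming a quadtree over a
# square height grid with power-of-two side length. Regions are (x0, y0,
# x1, y1) half-open rectangles of grid cells.

def approx_error(heights, region):
    """Max vertical distance between the true heights in `region` and a
    flat approximation by the region's mean height (a stand-in for the
    coarser-level terrain T_N)."""
    x0, y0, x1, y1 = region
    cells = [heights[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(cells) / len(cells)
    return max(abs(h - mean) for h in cells)

def subdivide(heights, region, alpha, out):
    """Recursively split `region` into four quadrants until e_T(N) <= alpha
    or the region is a single cell; leaves are collected in `out`."""
    x0, y0, x1, y1 = region
    if (x1 - x0 <= 1 and y1 - y0 <= 1) or approx_error(heights, region) <= alpha:
        out.append(region)  # leaf: this resolution is good enough
        return
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    for sub in ((x0, y0, mx, my), (mx, y0, x1, my),
                (x0, my, mx, y1), (mx, my, x1, y1)):
        subdivide(heights, sub, alpha, out)

# Flat terrain with one raised corner: the flat quadrants stay coarse,
# only the bumpy quadrant is refined to single cells.
heights = [[0, 0, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 8],
           [0, 0, 8, 8]]
leaves = []
subdivide(heights, (0, 0, 4, 4), 1.0, leaves)
print(len(leaves))  # 7 leaf regions: three 2x2 quadrants plus four cells
```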
B. Data Preloading Algorithm

In this section, we introduce the data preload algorithm utilized in the proposed integrated SVS system. Although modern graphics stations may be capable of rendering thousands of shaded or textured polygons at interactive rates, the geometric complexity of the terrain data still far exceeds the capability of the computation servers on the aircraft. Therefore, many Level of Detail (LOD) rendering algorithms have been proposed in previous work on large terrain display for airborne SVS systems. The bottleneck of ordinary SVS systems is that the terrain data are too large to be stored in the computation servers. Only the data that may be required in the future are loaded from the terrain data server to the computation server. In order to minimize the unnecessarily transmitted data, we propose a data preload algorithm that selects the least required data to be preloaded.

The data preload algorithm considers the navigation information, the aircraft status, the operation of the pilot, and the important spots identified by tracking the eyeball movements of the pilot. In ordinary SVS systems, the terrain data are stored in a hierarchical bintree or quadtree format. The vertexes of each triangle/rectangle are arranged in layers. Generally, when the SVS module needs to render the current view of the terrain, it renders the terrain near the aircraft at high resolution and the distant view at low resolution. Since the capability of the computation server and the terrain data server is limited, at each time step the computation server only loads the necessary terrain data from the terrain data server. The navigation information of the aircraft is obtained from the navigation server of the SVS module.
The navigation information contains the route from the source to the destination. The computation server only preloads the terrain data within the sight scope of the route. The flight status of the aircraft mainly consists of three kinds of information: the altitude, the velocity, and the direction of
Fig. 4. Preloaded data according to the view frustum

the flight. In Figure 4, the area of preloaded terrain data at the current time step is illustrated. We suppose that the left and right angles to the forward direction are δ_l and δ_r, and d is the extended terrain distance of sight. In order to simplify the computation, we suppose the aircraft is the origin of the axes: the aircraft heading is the Y axis, the vector connecting the center of the earth and the aircraft is the Z axis, and the vector perpendicular to the YOZ plane is the X axis. The height of the aircraft is h, and the velocity of the aircraft is v. At initiation, the preloaded terrain extent is d = δ(h), and δ_l and δ_r are default angles that represent the possible area of sight, with δ_l = δ_r = δ. The preloaded terrain area is S = d² tan δ. The real-time extended sight distance d and the angles δ_l and δ_r are computed in formulation (1):

d = θ (h + v · δt · cos θ)
δ_l = δ + η (v′ sin α)
δ_r = δ − η (v′ sin α)     (1)

In formulation (1), θ is the angle between the velocity vector and the Z axis. The real-time preloaded terrain area is S = d² (tan δ_l + tan δ_r) / 2. For each frame in the time step, T_S^v is defined as the geometry patch of this terrain area S at time v. Based on the tree structure defined previously, we preload the nodes of T_S^v for each frame.
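As a rough illustration, the following Python sketch evaluates formulation (1), assuming the reading d = θ(h + v·δt·cos θ) with half-angles δ ± η(v′ sin α) and area S = d²(tan δ_l + tan δ_r)/2; the parameter names and units are assumptions, not the authors' implementation.

```python
import math

def preload_area(h, v, dt, theta, delta, eta, v_lat, alpha):
    """Sketch of formulation (1): the preloaded sight distance d ahead of
    the aircraft, the left/right view half-angles, and the resulting
    preloaded terrain area S. theta is the angle between the velocity
    vector and the Z axis; eta scales the lateral-velocity correction;
    v_lat plays the role of v' in the text."""
    d = theta * (h + v * dt * math.cos(theta))
    delta_l = delta + eta * (v_lat * math.sin(alpha))
    delta_r = delta - eta * (v_lat * math.sin(alpha))
    # S = d^2 (tan(delta_l) + tan(delta_r)) / 2; with delta_l = delta_r
    # = delta this reduces to the initial area S = d^2 tan(delta).
    S = d * d * (math.tan(delta_l) + math.tan(delta_r)) / 2
    return d, delta_l, delta_r, S

# With zero lateral velocity the half-angles stay symmetric.
d, dl, dr, S = preload_area(h=1000, v=100, dt=1.0, theta=0.5,
                            delta=0.6, eta=0.1, v_lat=0.0, alpha=0.3)
print(dl == dr)  # True
```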
C. Marking Interested Areas based on Eye Tracking

The greedy data preload algorithm is capable of minimizing the amount of unnecessary terrain data transmission between the terrain data server and the computation server. However, based on a survey of pilots who encountered abnormal events during flight [15], each pilot requires different spots on the screen to be shown at higher resolution. Since there is no obvious way of satisfying all pilots' needs, we propose a method that discovers the important spots via the eyeball movements of the pilot.

The novel method consists of two steps. First, the flight route and the eyeball movements of the pilot are recorded during the whole trip with time stamps. Each eyeball movement is re-mapped to the terrain data of the route by the time stamps. According to the study in [16], the interested area can be acquired by learning the trail of eyeball movements. Since we use the time stamps to re-map the eyeball movements and the
terrain vertexes, it is easy to find the spots in the route that the pilots gazed at or became watchful of. It is also possible to use an HMM or another machine learning method to find the interested area in a more precise way. After the interested spots are located, the terrain data server preloads their data to the computation server in the next flight trip. Each time, the interested areas are updated from the last record of the same trip. The mechanism maintains a limited list of spots in descending order of priority. The spots of least priority are removed from the list, and new spots of higher priority are added to the list. The whole algorithm is presented in Algorithm IV.1.
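The steps of Algorithm IV.1 can be sketched in code as follows; the timestamp remapping and the priority-list update are simplified, and all names (spot ids, sample tuples) are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of Algorithm IV.1. A "spot" is a terrain vertex id; its priority
# is the pilot's accumulated attention time at that vertex, obtained by
# remapping eyeball samples onto route vertexes via shared time stamps.

def remap_attention(eye_samples, route_stamps):
    """Steps 1-2: map (time_stamp, gaze_duration) eye samples to the
    route vertex active at that time stamp and accumulate attention."""
    attention = {}
    for stamp, duration in eye_samples:
        # pick the latest route vertex whose time stamp precedes the sample
        vertex = max((t for t in route_stamps if t[0] <= stamp),
                     key=lambda t: t[0], default=None)
        if vertex is not None:
            attention[vertex[1]] = attention.get(vertex[1], 0.0) + duration
    return attention

def update_priority_list(old_list, attention, n):
    """Steps 3-4: merge new spots with the previous list L_{t-1} and keep
    the top n spots by priority as the new list L_t."""
    merged = dict(old_list)
    for spot, t in attention.items():
        merged[spot] = max(merged.get(spot, 0.0), t)
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]

# Route vertexes A, B, C become active at t = 0, 10, 20 respectively.
route = [(0.0, "A"), (10.0, "B"), (20.0, "C")]
eyes = [(1.0, 2.0), (12.0, 5.0), (13.0, 1.0), (25.0, 0.5)]
att = remap_attention(eyes, route)
new_list = update_priority_list([("A", 1.0)], att, 2)
print(new_list)  # [('B', 6.0), ('A', 2.0)]
```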
Algorithm IV.1 The Data Preload Algorithm

Require: The terrain data points of trip P; the record of eyeball movements during the trip, E_P; the priority list of the previous time step, L_{t-1}; the time stamps of the terrain data in the route, S_r; the time stamps of the eyeball movements during the trip, S_e.
Ensure: A new priority list of spots, L_t.
1: Remap E_P to P based on S_r and S_e.
2: Detect interested vertexes by the gazing time or the glance time. The interested spots are given a priority based on the attention time of the pilot.
3: Update the priority list L_{t-1} by comparing the priority of new spots to the ones in the list. The spots of least priority are removed from the list.
4: Generate the new priority list L_t by moving the top N spots of L_{t-1} to L_t.

Fig. 5. The historical interested areas (in red circles) of the pilots are displayed at higher resolutions.

Fig. 6. The rendering result of the terrain area

V. FATIGUE DETECTION AND WARNING MECHANISM
In this section, we introduce the novel warning mechanism based on eye tracking, named the Eye Tracking Fatigue Alert (ETFA). The proposed algorithm can be divided into two steps. First, a fatigue detection mechanism is generated. Second, the fatigue detection mechanism is integrated with the SVS system.

In our implementation, every tenth frame from the eye tracking video is processed. We utilize the important spots acquired by the optimal data preload algorithm to activate the warning mechanism. When the flight is within the warning range of a spot, the fatigue alarm is activated. As long as the fatigue status of the pilot is above the normal status, an alert is sent to the pilot.

When the system starts, frames are continuously fed from the camera to the eye tracking analyzer. We use the initial frame to localize the eye positions. Once the eyes are localized, we start the tracking process, using information from previous frames to achieve localization in subsequent frames. During the tracking process, error detection is performed in order to recover from possible tracking failures. When a tracking failure is detected, the eyes are relocalized.

The fatigue detection and warning mechanisms are activated simultaneously during the flight. When preloading the terrain data, the SVS system detects potential risks on the terrain map, such as high mountains, towers, waters, or other static obstacles. The sensors of the SVS system detect the flight status in real time in order to analyze the current risk level of the flight. If the pilots are fatigued when the flight is near a risky terrain area, or the risk level of the flight is too high, the warning device sends signals immediately.
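The activation logic described above can be sketched as follows; the threshold, the fatigue score, and the function names are illustrative assumptions, not the authors' implementation.

```python
import math

# Sketch of the ETFA trigger: the alarm is armed only when the aircraft
# is within the warning range of an important spot, and it fires while
# the estimated fatigue level stays above the normal level. All names
# and thresholds are illustrative.

def within_warning_range(aircraft_xy, spot_xy, warn_radius):
    """True when the aircraft is within the spot's warning radius."""
    return math.dist(aircraft_xy, spot_xy) <= warn_radius

def etfa_alert(aircraft_xy, important_spots, warn_radius,
               fatigue_level, normal_level=0.5):
    """Return True when a warning signal should be sent to the pilot."""
    armed = any(within_warning_range(aircraft_xy, s, warn_radius)
                for s in important_spots)
    return armed and fatigue_level > normal_level

# Near an important spot and fatigued -> alert fires.
print(etfa_alert((0, 0), [(3, 4)], 5.0, fatigue_level=0.8))  # True
```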
VI. EXPERIMENTS AND USER STUDY
The evaluation of our S-E system is conducted in three parts. First, we invited pilots to use our S-E system in an aviation simulation environment in order to evaluate the display performance of the system. Second, we invited twenty pilots into 8 experimental scenarios to test the accuracy of fatigue detection. Third, we conducted a user study with the pilots to evaluate the improvement of the driving experience. A questionnaire was given to the pilots who flew a long-distance simulation using the proposed S-E system.
A. Performance Evaluation
The pilots' eyeball movements are recorded the first time they participate in the simulation. The multi-resolution display performance is shown in Figure 8. The simulation is a one-hour flight, including the take-off period, the smooth flight in the middle, and the landing period. We demonstrate the data transmission amount between the terrain data server and the computation server during the simulation in Figure 9. In Figure 9, the X axis is the time line and the Y axis is the amount of data transmitted from the data server to the computation server. The black line at the top is the amount of data transmitted without the data prediction algorithm; the histogram is the amount of data transmitted using the proposed data prediction algorithm. The "Taking off" label marks the period when the aircraft rises from the airport to a certain height. The "Smooth flight in the middle" label marks the period when the aircraft cruises at a certain altitude. The "Landing" label marks the period when the aircraft descends to the ground.

The evaluation shows that, without an optimized data preload algorithm, the throughput of the network and the computation load on the servers remain high during the whole simulation. After utilizing the data prediction algorithm, the data transmission amount is reduced to an acceptable degree.

Fig. 7. Data throughput of the simulation

TABLE I
EXPERIMENTAL RESULTS OF THE SCENARIOS

              No. of S-E   Total No.   Accuracy
Scenario 1        55           97        56.7%
Scenario 2        68           90        75.6%
Scenario 3        34           37        91.9%
Scenario 4        60           75        80.0%
Scenario 5        23           24        95.8%
Scenario 6        17           17        100%
Scenario 7        25           28        89.3%
Scenario 8         9           10        90.0%
B. Fatigue Detection Evaluation
Twenty participants were invited for this evaluation. The experiment was conducted in a cockpit equipped with the proposed integrated SVS-Eyetracking system. Participants flew 8 experimental scenarios of 80-100 minutes each, involving a curved step-down approach through a terrain-challenged region to a simulated airport. Pilots were instructed to use the eye tracking device during the whole experiment period. While flying, pilots were instructed to detect any important/interesting terrain spot that became visible on the SVS display, and also to report any changes in traffic altitude that they noticed on either the SVS display or the navigation display.

During the evaluation, some accidents were designed to occur at random times. If the pilot failed to react, the simulation judged the pilot to be in a fatigued state. The eye tracking device also evaluated the fatigue degree of the pilot in real time. In all experimental scenarios, the eye tracking system did not warn the pilots when it found they were fatigued; it only recorded the fatigue warning signals. The experimental results are shown in Table I.

The first four scenarios are comparably challenging, and the possibility of risky scenarios occurring is high. However, since the eye tracking device only records the fatigue warning signals, the number of fatigue events detected by the S-E system is always smaller than the number of fatigue events the simulation records. The experimental results of the last four scenarios show that the number of fatigue events the S-E system detected is very similar to the number the simulation detected.
TABLE II
RATING QUESTIONS FOR DRIVING EXPERIENCE
(Questions are rated from 1 to 5; 5 is the score for a perfect experience.)

• Is the SVS system showing terrain consistent with the simulation environment?
• Is the SVS system correctly showing the symbols of particular spots on the map?
• Is there any delay on the display screen of the SVS system during the simulation?
• Are there any ambiguous symbols on the screen?
• Did any display error occur during the simulation?
• How intrusive is the eye tracking device while you fly?
• Is there any discomfort (e.g. dazzle) caused by the system during the simulation?
• Is the warning signal noticeable?
• Are any potential risks noticed thanks to the alert?
• Is the fatigue detection accurate according to the scenario?
• Is the eye tracking device working correctly during the simulation?
• Does the interested area on the screen have better resolution?
• Are the interested areas on the screen the spots you paid attention to during the last simulation?
• Were there any false alerts during the simulation?
• Is the terrain warning displayed correctly on the screen?
• Is the fatigue detection helpful for avoiding potential risks?
C. User Study for the Driving Experience
The study in [23] categorized the human-factors issues related to SVS systems into three research areas: image quality, information integration, and operational concepts. Based on their study, a questionnaire was given to the participants to evaluate their driving experience using the proposed S-E system.

The questionnaire consists of thirty questions: fifteen rating questions for the users' evaluation of their driving experience, and fifteen questions for their opinions on the system settings and their suggestions. The evaluation questions cover the performance of the SVS system, the comfort of using the system, the accuracy of fatigue detection and the warning signal, and other human-factor questions. The setting and opinion questions inquired about the setting parameters of the system and the suggestions of the participants. The questionnaire is shown in Table II and Table III.

Most of the pilots reported that the system works smoothly with nearly no errors on the display screen. Some participants indicated that the symbols of certain terrain spots are too simple,
and expected us to update the textures of buildings and other obstacles. Some participants reported false alarms during the flight, but they also indicated that the alarms did not cause any maloperations. The overall feedback indicates that the S-E system works efficiently and is helpful for avoiding potential risks during the simulation.

The suggestions of the participants were carefully collected, and the system settings were altered according to their opinions. The suggestions also indicate that the S-E system may require more consideration of human factors in the future.

VII. CONCLUSION
In this paper, we propose an SVS-Eyetracking architecture which integrates an eye tracking device into an SVS system. Based on the proposed architecture, we propose a data prediction algorithm which reduces the amount of data transmitted from the terrain data server to the computation server. The data prediction algorithm considers the flight status provided by the SVS system and the eyeball movements of the pilot. The evaluation shows that the proposed algorithm reduces the transmitted data significantly. We also implemented a fatigue warning mechanism in the proposed S-E system. The evaluation shows that the S-E system works efficiently and is capable of avoiding potential risks caused by fatigue in the flight simulation.

ACKNOWLEDGMENT
The authors would like to thank...

REFERENCES

[1] K. Baumann, J. Doellner, K. Hinrichs, and O. Kersting. A hybrid, hierarchical data structure for real-time terrain visualization. In Computer Graphics International 1999, Proceedings, pp. 85–92, 1999.
[2] D. B. Beringer. Development of highway-in-the-sky displays for flight path guidance: History, performance results, guidelines. In Human Factors & Ergonomics Society Annual Meeting Proceedings, 44(13):21–24, 2000.
[3] O. Bimber, F. Coriand, A. Kleppe, E. Bruns, S. Zollmann, and T. Langlotz. Superimposing pictorial artwork with projected imagery. In ACM SIGGRAPH 2004 Sketches, p. 78, 2004.
[4] C. L. Philip Chen and C. Zhang. Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, (275):314–347, 2014.
[5] H. P. Chiu, A. Das, P. Miller, S. Samarasekera, and R. Kumar. Precise vision-aided aerial navigation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 688–695, 2014.
[6] P. Cignoni, F. Ganovelli, E. Gobbetti, F. Marton, F. Ponchio, and R. Scopigno. BDAM: batched dynamic adaptive meshes for high performance terrain visualization. In Computer Graphics Forum, pp. 505–514, 2003.
[7] X. F. Dai, H. J. Xiong, and X. W. Zheng. A New Three-Dimensional Spherical Terrain Rendering Method Based on Network Environment. Springer Berlin Heidelberg, 2012.
[8] E. Yusov and M. Shevtsov; G. McIntyre and K. Hintz. Comprehensive Approach to Sensor Management, Part I: A Survey of Modern Sensor Management Systems. IEEE Transactions on SMC, 1999.
[9] R. M. Haralick, K. Shanmugam, and I. Dinstein. Textural Features for Image Classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6):610–621, 1973.
[10] T. O. Olasupo and C. E. Otero. The Impacts of Node Orientation on Radio Propagation Models for Airborne-Deployed Sensor Networks in Large-Scale Tree Vegetation Terrains. IEEE Transactions on Systems, Man, and Cybernetics: Systems, PP(99):1–14, 2017.
[11] J. S. Weszka, C. R. Dyer, and A. Rosenfeld. A Comparative Study of Texture Measures for Terrain Classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6(4):269–285, 1976.
[12] M. Duchaineau, M. Wolinsky, D. E. Sigeti, M. C. Miller, C. Aldrich, and M. B. Mineev-Weinstein. ROAMing terrain: Real-time optimally adapting meshes. In Visualization '97, Proceedings, pp. 81–88, 1997.
[13] H. Hoppe. Smooth view-dependent level-of-detail control and its application to terrain rendering. In Visualization '98, Proceedings, pp. 35–42, 1998.
[14] L. J. P. III, J. J. Raymond Comstock, L. J. Glaab, L. J. Kramer, J. J. Arthur, and J. S. Barry. The efficacy of head-down and head-up synthetic vision display concepts for retro- and forward-fit of commercial aircraft. International Journal of Aviation Psychology, 14(1):53–77, 2004.
[15] O. Koichi and N. Tomoyuki. An interactive deformation system for granular material. pp. 51–60, 2005.
[16] L. J. Kramer, J. J. A. III, R. E. Bailey, and L. J. P. III. Flight testing an integrated synthetic vision system. 2005.
[17] X. Y. Li, M. Li, and W. Cai. The research of 3D terrain generation and interactivity realization techniques in virtual battlefield environment. In
International Conference on Image & Graphics , pp. 668–671, 2009.[18] Y. Liu, H. Chang, and S. L. Dai. Large scale terrain tessellation inflight simulator visual system. In
Guidance, Navigation and ControlConference , pp. 1028–1033, 2014.[19] J. Luo, G. Hu, and G. Ni. Dual-space ray casting for height fieldrendering.
Computer Animation & Virtual Worlds , 25(1):45C56, 2014.[20] A. I. Mourikis and S. I. Roumeliotis. A multi-state constraint kalmanfilter for vision-aided inertial navigation. 22:3565–3572, 2007.[21] R. Pajarola. Large scale terrain visualization using the restricted quadtreetriangulation. In
Visualization ’98. Proceedings , pp. 19–26, 1998.[22] R. Pajarola and E. Gobbetti. Survey of semi-regular multiresolutionmodels for interactive terrain rendering.
Visual Computer , 23(8):583–605, 2007.[23] T. Schnell, Y. Kwon, S. Merchant, and T. Etherington. Improvedflight technical performance in flight decks equipped with syntheticvision information system displays.
International Journal of AviationPsychology , 14(1):79–102, 2009.[24] M. Sebastian, D. Patrick, W. Rol, and K. Reinhard. Context aware terrainvisualization for wayfinding and navigation.
Computer Graphics Forum ,27(7):1853–1860, 2008.[25] T. Wiesemann, J. Schiefele, and W. Kubbat. Multi-resolution terraindepiction on an embedded 2d/3d synthetic vision system.
AerospaceScience & Technology , 9(9):517–524, 2005.[26] D. M. Williams, M. C. Waller, J. H. Koelling, D. W. Burdette, W. R.Capron, J. S. Barry, R. B. Gifford, and T. M. Doyle. Concept of operationsfor commercial and business aircraft synthetic vision systems.
AmericanInstitute of Aeronautics & Astronautics , 2001.[27] E. Yusov and M. Shevtsov. High-performance terrain rendering usinghardware tessellation.
Journal of Wscg , 19(1):85–92, 2011.[28] M. Xu, M. Li, W. Xu, Zhi. Deng, Y. Yang, K. Zhou. InteractiveMechanism Modeling from Multi-view Images.
ACM Transactions onGraphics , 35(6): Article 236, 2016.[29] M. Xu, J. Zhu, P. Lv, B. Zhou, M. Tappen, R. Ji. Learning-based ShadowRecognition and Removal from Monochromatic Natural Images.
IEEETransactions on Image Processing , 26(12):5811-5824, 2017.[30] J. Niu, J. Lua, M. Xu*, P. Lv, X. Zhao. Robust Lane Detection usingTwo-stage Feature Extraction with Curve Fitting.
Pattern Recognition ,59: 225-233, 2016.[31] M. Xu, N. Gu, W. Xu, M. Li, J. Xue, B. Zhou. Mechanical AssemblyPacking Problem Using Joint Constraints.
Journal of Computer Scienceand Technology , 32(6): 1162C1171, 2017.[32] M. Xu, P. Lv, M. Li, H. Fang, H. Zhao, B. Zhou, Y. Lin, L. Zhou.Medical image denoising by parallel non-local means.