Hostile Intent Identification by Movement Pattern Analysis: Using Artificial Neural Networks
Souham Biswas
J.K. Institute of Applied Physics & Technology University of Allahabad Allahabad, India [email protected]
Manisha J. Nene
Dept. of Applied Mathematics and Computer Engineering Defence Institute of Advanced Technology, Defence R&D Organization, Ministry of Defence Pune, India [email protected]
Abstract—In recent years, the problem of identifying suspicious behavior has gained importance, and identifying such behavior using computational systems and autonomous algorithms is highly desirable in a tactical scenario. So far, solutions have been primarily manual, relying on human observation of entities to discern the hostility of a situation. A number of fully and partially automated solutions exist to address this problem statement, but they lack the capability of learning from experience and work in conjunction with human supervision, which is extremely prone to error. In this paper, a generalized methodology to predict the hostility of a given object based on its movement patterns is proposed; it has the ability to learn and is based upon the human mechanism of "learning from experiences". The proposed methodology has been implemented in a computer simulation. The results show that the posited methodology has the potential to be applied in real-world tactical scenarios.
Keywords—Hostility; Neural Networks; Artificial Intelligence; Defence; Maritime

I. INTRODUCTION
One of the most daunting tasks for the defence forces of a country has always been the effective identification and elimination of threats which interfere with the interests or the security of the nation. In situations of conflict, the fallibility of human judgement, caused by stress or similar factors, in determining the hostility of a given target can prove fatal and may result in considerable loss of resources. Therefore, automating this determination is highly desirable. The term "hostility" is inherently multifarious; its meaning depends upon the observer. This implies that, viewed analytically, there are a considerable number of variables pertaining to this characteristic. Every day, new attack techniques are observed and new defence tactics are innovated. It is impossible to define an all-encompassing set of parameters or variables which would successfully quantify "hostility" in a general sense. However, there is one parameter of hostility pertaining to the object in question that spans the others and is potentially impervious to the nature of the observer: the location/existence of the object under observation. The location of a given object can be analysed to obtain a multitude of characteristics from which certain behavioural traits can be extracted. Although a perfect analytical solution to this problem statement is far-fetched as of now, a more promising approach is to incorporate into a machine the way humans try to solve this problem; to include the element of "intuition". The approach posited here draws on the fact that the human mind is a continuously evolving palimpsest of neurons, which adapt and evolve to any situation. Hence the deployment of artificial neural networks; all the more for their reputation for being extremely fault tolerant, which is of paramount importance when considering such problems with high margins for error.
Background and related work:
During the literature survey, it was observed that a small set of analytical approaches do exist [1], [5] which address this problem, the most prominent of which is the US patent "Detection of Hostile Intent from Movement Patterns" [1]. What these approaches lack, however, is the ability to adapt according to the situation. It is not possible to discretely classify a given behaviour as "hostile" or "not hostile". Therefore, it is imperative that the system intended to make that classification learn how to do so by itself and adapt in pace with the constantly evolving attack/defence tactics. Presented in this paper is an approach which seeks to make up for the shortcomings of the present systems. The paper is organized as follows: Section II describes the various parameters and assumptions involved and the basis of the methodology. Section III enumerates the actual methodology, critical parameters, algorithms and the processes involved. In Section IV, simulation results are elucidated. Finally, Section V summarises and concludes the proposed work and mentions the scope for future work.

II. PROPOSED WORK

A. Basis of Hostility Detection
One of the most prominent characteristics of the human brain is its tendency to correlate new information with previous "experiences" to draw conclusions [7]. The degree of this correlation and the subsequent processing of the same allow us to make fuzzy predictions [6] or, in one sense, form an intuition. The notion of hostility in general lies in previous experiences of such situations endured by an individual. When presented with a scenario for hostility detection, the brain tries to discern the degree of similarity between the new situation and a catalogue of "hostile"-labelled situations previously encountered. A high similarity calls for evasive measures. To summarize, one takes steps to ensure that the sequence of events which led to previously sustained events of hostility does not repeat. To model this as an automated solution, we consider neural networks. We follow a similar process of training the network; that is, an expansive dataset of known hostile situations is made incident on the network. As the training proceeds, the network tends to form its own notions for enumerating hostilities. In other words, an artificial sense of intuition is formed.

B. Assumptions & Parameters Involved
To parameterize a given situation for neural network training [4], we consider the locations of the objects. This quantity may be in parametric, polar or any other co-ordinate form. The definitions of a few prominent terminologies are given below:

• Area of Observation:
It is the physical region which is being monitored for hostile activities. Practically, this can translate to a given region on the shore, the range of a radar, etc. • Object:
Any entity inside the area of observation which will be subject to probation for determination of hostility is termed as “object”. • Hostility:
It is the probability that an object will commit an act of hostility in the immediate future. Other parameters derived from the locations of objects, such as the speed and direction of an object, the density of objects in a given area, etc., may also be computed and added as inputs to the neural network. The shape of the area of observation does not pose any constraint on location determination.

III. METHODOLOGY
The system will take as inputs the locations of the multiple objects inside the area of observation in the form of X and Y coordinates. The neural network utilized is a 2-layer feed-forward network [8] with the sigmoid function (1) as the activation function.

$f_{sig}(x) = \frac{1}{1 + e^{-x}}$  (1)
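As an illustration (not part of the paper's original implementation, which used MATLAB), the activation in Eq. (1) can be sketched in Python:

```python
import math

def sigmoid(x: float) -> float:
    """Sigmoid activation from Eq. (1): f_sig(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# The output is bounded in (0, 1), which is what allows the network's
# outputs to be read directly as hostility probabilities.
print(sigmoid(0.0))   # 0.5
```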
The datasets involved in training are of the following types- • Raw Dataset – This contains the records of locations of all the objects in the area of observation and their corresponding probabilities of hostility. • Normalized Dataset –
This is the dataset which is actually used to train the neural network. Normalized Dataset is obtained by generating all the permutations of the raw dataset.
System Variables and Relations:

• $N$: Number of objects inside the area of observation.
• $M_k$: Number of entries in the raw dataset having dataset index $k$ (the $k^{th}$ raw dataset).
• $K$: Number of training datasets.
• $M'_k$: Number of entries in the normalized dataset having dataset index $k$ (the $k^{th}$ normalized dataset).
• $X^k_{vu}$: X coordinate of the object having index $v$ in the $k^{th}$ raw dataset at observation index $u$.
• $X'^k_{vu}$: X coordinate of the object having index $v$ in the $k^{th}$ normalized dataset at observation index $u$.
• $Y^k_{vu}$: Y coordinate of the object having index $v$ in the $k^{th}$ raw dataset at observation index $u$.
• $Y'^k_{vu}$: Y coordinate of the object having index $v$ in the $k^{th}$ normalized dataset at observation index $u$.
• $\Omega^k_{vu}$: Probability of hostility of the object having index $v$ in the $k^{th}$ raw dataset at observation index $u$.
• $\Omega'^k_{vu}$: Probability of hostility of the object having index $v$ in the $k^{th}$ normalized dataset at observation index $u$.
• $P^k_{vu}$: Location of the object having index $v$ at observation index $u$ in the $k^{th}$ raw dataset.
• $P'^k_{vu}$: Location of the object having index $v$ at observation index $u$ in the $k^{th}$ normalized dataset.
• $A_{ku}$: Locations of all objects (sets of X–Y coordinates) in the $k^{th}$ raw dataset at observation index $u$.
• $A'_{ku}$: Locations of all objects (sets of X–Y coordinates) in the $k^{th}$ normalized dataset at observation index $u$.
• $B_{ku}$: Probabilities of hostility of all objects (sets of $\Omega^k_{vu}$ values) in the $k^{th}$ raw dataset at observation index $u$.
• $B'_{ku}$: Probabilities of hostility of all objects (sets of $\Omega'^k_{vu}$ values) in the $k^{th}$ normalized dataset at observation index $u$.
• $T_{ku}$: Raw training data having observation index $u$ in the $k^{th}$ training dataset.
• $T'_{ku}$: Normalized training data having observation index $u$ in the $k^{th}$ training dataset.
• $D_r$: Raw training dataset.
• $D_n$: Normalized training dataset.
• $Q_k$: $k^{th}$ raw dataset.
• $Q'_k$: $k^{th}$ normalized dataset.
• $L_k$: Dataset containing location data of all objects in the area of observation with raw dataset index $k$ ($k^{th}$ raw dataset).
• $H_k$: Dataset containing hostility probability data of all objects in the area of observation with raw dataset index $k$ ($k^{th}$ raw dataset).
• $L'_k$: Dataset containing location data of all objects in the area of observation with normalized dataset index $k$ ($k^{th}$ normalized dataset).
• $H'_k$: Dataset containing hostility probability data of all objects in the area of observation with normalized dataset index $k$ ($k^{th}$ normalized dataset).

The object index is a number assigned to each of the objects inside the area of observation to uniquely identify them.
Relations –
The mathematical relations between the variables mentioned previously are enumerated as follows.

$P^k_{vu} = [X^k_{vu}, Y^k_{vu}]$  (2)
$A_{ku} = [P^k_{vu} \mid v \in [1, N]] \;\; \forall \; u \in [1, M_k],\; k \in [1, K]$  (3)
$L_k = \{A_{ku} \mid u \in [1, M_k]\} \;\; \forall \; k \in [1, K]$  (4)
$B_{ku} = [\Omega^k_{vu} \mid v \in [1, N]] \;\; \forall \; u \in [1, M_k],\; k \in [1, K]$  (5)
$H_k = \{B_{ku} \mid u \in [1, M_k]\} \;\; \forall \; k \in [1, K]$  (6)
$T_{ku} = \{A_{ku}, B_{ku}\} \;\; \forall \; k \in [1, K],\; u \in [1, M_k]$  (7)
$Q_k = \{L_k, H_k\} \;\; \forall \; k \in [1, K]$  (8)
$D_r = \{Q_k \mid k \in [1, K]\}$  (9)
$P'^k_{vu} = [X'^k_{vu}, Y'^k_{vu}]$  (10)
$A'_{ku} = [P'^k_{vu} \mid v \in [1, N]] \;\; \forall \; u \in [1, M'_k],\; k \in [1, K]$  (11)
$L'_k = \{A'_{ku} \mid u \in [1, M'_k]\} \;\; \forall \; k \in [1, K]$  (12)
$B'_{ku} = [\Omega'^k_{vu} \mid v \in [1, N]] \;\; \forall \; u \in [1, M'_k],\; k \in [1, K]$  (13)
$H'_k = \{B'_{ku} \mid u \in [1, M'_k]\} \;\; \forall \; k \in [1, K]$  (14)
$T'_{ku} = \{A'_{ku}, B'_{ku}\} \;\; \forall \; k \in [1, K],\; u \in [1, M'_k]$  (15)
$Q'_k = \{L'_k, H'_k\} \;\; \forall \; k \in [1, K]$  (16)
$D_n = \{Q'_k \mid k \in [1, K]\}$  (17)
$M'_k = M_k \times (N!)$  (18)

A. Procurement of Training Data
Initially, the set $D_r$ is to be generated, which is the raw dataset as previously explained.
TABLE I. TABLE REPRESENTATION OF $L_1$ SET

Sr. No. |            Object Locations
        |  A1    B1    A2    B2    A3    B3
1.      |  234   874   214   856   764   214
2.      |  045   698   102   523   154   601
3.      |  487   035   924   157   245   682
4.      |  147   256   651   654   213   746

a. Here $k = 1$ for $L_k$
Table I is a tabular illustration of a sample $L_1$; since this is the first dataset, $k = 1$. Here, the cell at index (2, A3) can be represented as $X^1_{32}$; the same can be extended to the other cells. This example dataset assumes there are only 3 objects in the area of observation. Similarly, we can have multiple training datasets.
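To make the indexing concrete, here is a minimal Python sketch (a hypothetical in-memory layout, not the paper's MATLAB code) of the $L_1$ set from Table I, where $X^k_{vu}$ reads as "X coordinate of object $v$ at observation $u$ in raw dataset $k$":

```python
# L[k] is the k-th raw location dataset; each entry is one observation
# holding an [x, y] pair per object. Values below are the rows of Table I.
L = {
    1: [
        [[234, 874], [214, 856], [764, 214]],  # observation u = 1
        [[45, 698], [102, 523], [154, 601]],   # observation u = 2
        [[487, 35], [924, 157], [245, 682]],   # observation u = 3
        [[147, 256], [651, 654], [213, 746]],  # observation u = 4
    ],
}

def X(k: int, v: int, u: int) -> int:
    """X coordinate of object v at observation u in raw dataset k."""
    return L[k][u - 1][v - 1][0]

# The cell at (row 2, column A3) of Table I, i.e. X^1 with v = 3, u = 2:
print(X(1, 3, 2))   # 154
```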
TABLE II. TABLE REPRESENTATION OF $L_2$ SET

Sr. No. |            Object Locations
        |  A1    B1    A2    B2    A3    B3
1.      |  568   248   278   698   421   297
2.      |  354   014   685   032   682   413
3.      |  570   694   724   031   824   246

b. Here $k = 2$ for $L_k$
Table II illustrates another dataset involved in training. Note that the two datasets are mutually independent and merely represent the log of locations of the multiple objects in the area of observation when an event of hostility had previously been sustained. Tables III and IV illustrate the tabular representations of the observed hostility probabilities ($H_k$) of the objects in the area of interest for the datasets with $k = 1$ and $k = 2$ respectively.
TABLE III. TABLE REPRESENTATION OF $H_1$ SET

Sr. No. |  Object Hostility Probabilities
        |  A1    A2    A3

c. Here $k = 1$ for $H_k$

TABLE IV. TABLE REPRESENTATION OF $H_2$ SET

Sr. No. |  Object Hostility Probabilities
        |  A1    A2    A3

d. Here $k = 2$ for $H_k$
The probabilities in Table III are only "0" or "1" because the network will undergo supervised training. The cell at index (3, A1) in Table IV can be represented as $\Omega^2_{13}$; the same can be extended to the other cells. The system variables defined previously are illustrated in the context of the present example in the succeeding text.

• $N = 3$
• $K = 2$
• $M_1 = 4$
• $M_2 = 3$
• $L_1$ = Table I
• $L_2$ = Table II
• $H_1$ = dataset containing hostility probability data of all objects in the area of observation with raw dataset index $k = 1$ (Table III). Similarly, $H_2$ is defined (Table IV).
• $A_{23}$ = 3rd row of Table II.
• $B_{12}$ = 2nd row of Table III.
• $D_r$ = collection of all Tables I–IV, organized as {(Table I, Table III), (Table II, Table IV)}.

Similarly, the other system variables can be computed. In practical application, this data can be obtained by analysing previous events of hostility sustained. The $D_r$ set so generated cannot be used to train the neural network yet; it has to be subjected to normalization to get $D_n$ (the normalized dataset), which will be used to train the neural network.

B. Generation of Normalized Training Data
Normalization refers to the generation of all permutations of the sets $L_k$ & $H_k$ for all $k$ from 1 to $K$. This process is important because a hostile object need not be assigned the same object index every time it is inside the area of observation. For example, suppose the neural network is trained using $D_r$ and that in the dataset, for some $n$, $u$ and $k$, $\Omega^k_{nu} = 1.00$. Correspondingly, the network is trained to output $\Omega^k_{nu} = 1.00$ whenever the input is $A_{ku}$. Here, it is evident that the object with index $n$ is hostile. But suppose that in the future the same object is assigned an object index of $n'$; then the system will fail to identify this hostile object successfully, as it has been trained to identify the hostile traits of the object at index $n$ and not at $n'$. However, if the system is also trained with all the permutations of $A_{ku}$ and $B_{ku}$ as input and output respectively, the system will always identify the hostile object irrespective of the object index assigned to it. To explain the normalization process, consider a scenario with $N = 2$, $K = 1$, $M_1 = 2$.
$A_{11} = [P^1_{11}, P^1_{21}]$
$A_{12} = [P^1_{12}, P^1_{22}]$
$B_{11} = [\Omega^1_{11}, \Omega^1_{21}]$
$B_{12} = [\Omega^1_{12}, \Omega^1_{22}]$

Generating all permutations of $A_{11}$, $A_{12}$, $B_{11}$ and $B_{12}$:

$A'_{11} = [P^1_{11}, P^1_{21}]$
$A'_{12} = [P^1_{21}, P^1_{11}]$
$A'_{13} = [P^1_{12}, P^1_{22}]$
$A'_{14} = [P^1_{22}, P^1_{12}]$

Similarly, the normalized dataset is generated for the set $H_1$.

$M'_1 = M_1 \times (N!)$ [From (18)]
$\Rightarrow M'_1 = 2 \times (2!) = 4$
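The normalization step above amounts to permuting the object ordering of each observation, applying the same permutation to the location list and the label list. A small Python sketch (illustrative, assuming list-based observations; the paper's implementation used MATLAB):

```python
from itertools import permutations

def normalize_observation(A_u, B_u):
    """Return every re-ordering of one observation: A_u is the list of
    [x, y] locations, B_u the matching hostility labels. Each of the N!
    permutations becomes one normalized training entry (cf. Eq. 18)."""
    entries = []
    for order in permutations(range(len(A_u))):
        entries.append(([A_u[i] for i in order],
                        [B_u[i] for i in order]))
    return entries

# N = 2 objects: one observation expands into 2! = 2 normalized entries,
# so a raw dataset with M_1 = 2 observations yields M_1 * N! entries total.
out = normalize_observation([[10, 20], [30, 40]], [1.0, 0.0])
print(len(out))   # 2
```

Because the labels are permuted together with the locations, a hostile object keeps its label no matter which index it is assigned, which is exactly the property the normalization is meant to guarantee.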
For some $k \in [1, K]$, $u \in [1, M'_k]$:

$T'_{ku} = \{A'_{ku}, B'_{ku}\}$ [From (15)]  (19)

Eq. (19) is the normalized training data at observation index $u$, as previously stated in Section II-B. In this, $A'_{ku}$ is the input data to the neural network and $B'_{ku}$ is the set of target outputs. The derivation of $D_n$ is as follows:

$L'_k = \{A'_{ku} \mid u \in [1, 4]\} \;\; \forall \; k \in [1, 1]$ [From (12)]
$H'_k = \{B'_{ku} \mid u \in [1, 4]\} \;\; \forall \; k \in [1, 1]$ [From (14)]
$Q'_k = \{L'_k, H'_k\} \;\; \forall \; k \in [1, 1]$ [From (16)]
$D_n = \{Q'_k \mid k \in [1, 1]\}$ [From (17)]

C. Neural Network Training
The structure of the neural network to be trained by $D_n$ is illustrated in Fig. 1.
Fig. 1 Structure of Neural Network: A 2-Layer Feed Forward Network with 2N input neurons and N output neurons
As shown in Fig. 1, the set $I$ is defined as:

$I = \{I_v : v \in [1, 2N]\}$
Hence, $I$ represents the set of input nodes, and for an object with index $v$, the set $p_v$ is defined as:

$p_v = \{I_{2v-1}, I_{2v}\}$
The set $p_v$ represents the location of the object having index $v$ as a set of X and Y co-ordinates. Similarly, $O_v$ is the hostility of the object having index $v$. While training, for a given $Q'_k$, we set $p_v = P'^k_{vu}$ and $O_v = \Omega'^k_{vu}$ and cycle the value of $u$ from 1 to $M'_k$. In each iteration, the system is trained using backpropagation, and gradually the certitude with which the system predicts the hostility probability of each object increases. We repeat this process for each training dataset, i.e. for all $k \in [1, K]$. But in each dataset, only 70% should be used for training, 20% for validation and the remaining 10% for testing purposes. The distribution of the data amongst these three groups has to be random.

D. Validation of Neural Network
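The random 70/20/10 partition described in the training procedure above can be sketched as follows (Python, illustrative; the function name and fixed seed are assumptions, not the paper's code):

```python
import random

def split_70_20_10(entries, seed=42):
    """Randomly partition normalized training entries into 70% training,
    20% validation and 10% testing, as prescribed in the text."""
    rng = random.Random(seed)      # fixed seed only for reproducibility
    shuffled = list(entries)
    rng.shuffle(shuffled)          # the distribution must be random
    n = len(shuffled)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_70_20_10(range(100))
print(len(train), len(val), len(test))   # 70 20 10
```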
The process of validation is carried out so as to determine when to stop training and to avoid over-fitting. At each iteration, the error is calculated from the validation data using Eq. (20).

$E = \sum_{j=1}^{N} \int \left[y_j(x, w) - t_j\right]^2 p(x, t_j)\, dx$  (20)

Eq. (20) is used as the network is a feed-forward network which is trained using back-propagation [2].
Here, $y_j(x, w)$ is a set of functional mappings [3] which relate an input $x$ with a given set of bias weights $w$, whose values are obtained by minimizing $E$. The joint probability density functions for the training data are given by $p(x, t_j)$, where $j = 1, 2, \ldots, N$ corresponds to each of the output neurons, $y_j$ is the output of neuron $j$ and $t_j$ is the target output for that neuron. Initially the error decreases, and the gradient of the error decrease rate changes until approaching zero. Training stops when generalization stops improving network performance as measured on the validation data.

E. Deployment of Neural Network
The flowchart to illustrate the system working is given in Fig. 2.
Fig. 2 Process Flowchart for Hostility Prediction System
A given neural network can cater to only a fixed number of objects in the area of observation. Therefore, multiple neural networks having different numbers of inputs must be trained and generated before pragmatic deployment. For example, a neural network having 8 inputs may be generated to analyze situations when only 4 objects are present in the area of observation; if the number of objects changes to 5, another neural network having 10 inputs will be needed. In node 001, the corresponding network is chosen. Subsequently, in node 002, the hostility probabilities of all the objects are calculated; if an object with alarming hostility is identified (003), defensive measures should be taken to avoid any casualty. Finally, if the system fails to warn of an impending hostile event, the network will retrain itself and learn from the experience after the hostile situation has subsided (006), much like the way humans learn from experiences; therefore, a similar attack could be prevented in the future. This failure mode is indeed a drawback and is caused by incomplete training, which is why the network is trained with an expansive dataset of known hostile situations before deployment.
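The selection step in node 001 can be sketched as a lookup from object count to a pre-trained network (Python, illustrative; the class, `networks` mapping and stub model are hypothetical names, not the paper's implementation):

```python
class HostilityPredictor:
    """Nodes 001/002 sketch: pick the network whose input width matches
    the current object count (2N inputs for N objects), then predict."""

    def __init__(self, networks):
        # networks: dict mapping object count N -> trained model callable
        self.networks = networks

    def predict(self, locations):
        n = len(locations)                  # number of objects observed
        net = self.networks.get(n)
        if net is None:
            raise KeyError(f"no network trained for {n} objects")
        flat = [c for xy in locations for c in xy]   # 2N-element input
        return net(flat)                    # N hostility probabilities

# Usage with a stub "network" that labels every object as benign:
pred = HostilityPredictor({2: lambda v: [0.0] * (len(v) // 2)})
print(pred.predict([[1, 2], [3, 4]]))   # [0.0, 0.0]
```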
IV. SIMULATION & RESULTS
The proposed system has been simulated and implemented using MATLAB® and C. The neural network was generated in MATLAB. The simulation involves a front-end C
Fig. 3 Confusion Matrix Pertaining to Testing Phase of Neural Network
Fig. 4 Training Performance Measure of Neural Network
[Fig. 2 node labels: Start → 001 Check number of objects in area of observation and assign neural network → 002 Predict hostility probabilities of objects → 003 Object with high hostility identified? → 004 Highlight hostile object / initiate defensive measures → 005 Attack made by any object? → 006 Retrain/generate network using current location data of all objects → End / Power Down]

Fig. 5 Error Histogram of Training

Fig. 6 System Simulation
For simulation purposes, a scenario with system variables defined as $N = 5$, $K = 1$, $M_1 = 494$ was considered. Figs. 3, 4 & 5 show characteristics of the neural network right after training and testing. Shown in Fig. 3 is the confusion matrix, which maps the outputs of the neural network to the actual outputs over the test data (10% of the dataset). By observing the diagonal, it is evident that the neural network has correctly predicted the hostilities of all objects for all inputs. Fig. 4 illustrates how the mean squared error of the network decreases as training proceeds. We can derive from Fig. 4 that the network starts to produce proper results after 16 training iterations. Fig. 5 shows the error range in which most of the network outputs lie over the input data. Here, as most of the outputs lie in an error range of around 0.000169, which is negligible, we can say that the network outputs hostility probabilities with sufficient certitude. Fig. 6 is a screenshot of the actual simulation. Here, each dot represents an object inside the area of observation. The dot enclosed in the box is controllable by the user. To the extreme right, there is a landmass which is to be protected. The encircled dots have two numbers associated with them, placed one on top of the other; the one below represents the object index and the one above is the probability of hostility of the object. It was observed that the patterns of attack used to train the neural network were successfully highlighted in this implementation whenever an attack with a similar pattern was made. Moreover, if the user performed an attack, then in the future a similar kind of attack made by the user was automatically highlighted, which exemplifies the learning ability of the proposed system. The simulation process was carried out over a number of scenarios with different system variables, and similar results were obtained.

V. CONCLUSION

A solution has been proposed which is capable of fully automating the process of hostility detection. The system takes as inputs only the locations of the objects; therefore, it can be directly deployed with radar systems or other visual surveillance systems. Beyond the location of the target, the incorporation of parameters specialized to the domain of application has the potential to yield results with greater accuracy. The system presented in this paper incorporates only the location of the object so as to maintain a sense of generality. Therefore, a framework has been proposed which can be specialized to encompass different domains of hostility detection. The framework posited in this paper can also be used in fields apart from maritime surveillance: the system can be deployed for the analysis and detection of malicious activity wherever it can learn by accepting location data pertaining to the entities under observation.

REFERENCES

[1] Jeffrey V. Nickerson, Weehawken, NJ (US), "Detection of Hostile Intent from Movement Patterns", US Patent 8,145,591 B2, 2012.
[2] C.M. Bishop, "Novelty detection and neural network validation", IEE Proc.-Vis. Image Signal Process., Vol. 141, No. 4, 1994.
[3] Navneet P. Singh, Manisha J. Nene, "Analysis of GPR Images: Using Neural Networks", International Conference on Microelectronics, Communication and Renewable Energy (ICMiCR-2013), 2013.
[4] David Kriesel, "A Brief Introduction to Neural Networks".
[5] Timothy J. Ross, "Fuzzy Logic with Engineering Applications", John Wiley & Sons Publishing, 2009.
[7] Daniel L. Schacter, Donna Rose Addis, "Remembering the past to imagine the future: the prospective brain", Nature Reviews Neuroscience 8, September 2007.
[8] David J. Montana and Lawrence Davis, "Training Feedforward Neural Networks Using Genetic Algorithms", Proc. International Joint Conference on Artificial Intelligence, 1989.