Turkey Behavior Identification System with a GUI Using Deep Learning and Video Analytics
Shengtai Ju, Sneha Mahapatra, Marisa A. Erasmus, Amy R. Reibman, and Fengqing Zhu
School of Electrical and Computer Engineering and Department of Animal Sciences, Purdue University, West Lafayette, Indiana, USA
Abstract
In this paper, we propose a video analytics system to identify the behavior of turkeys. Turkey behavior provides evidence to assess turkey welfare, which can be negatively impacted by uncomfortable ambient temperature and various diseases. In particular, healthy and sick turkeys behave differently in terms of the duration and frequency of activities such as eating, drinking, preening, and aggressive interactions. Our system incorporates recent advances in object detection and tracking to automate the process of identifying and analyzing turkey behavior captured by commercial-grade cameras. We combine deep learning and traditional image processing methods to address challenges in this practical agricultural problem. Our system also includes a web-based user interface to create visualizations of the automated analysis results. Together, we provide an improved tool for turkey researchers to assess turkey welfare without time-consuming and labor-intensive manual inspection.
Introduction
Turkey is an important source of poultry meat in the United States and worldwide. Stress and diseases can pose threats to turkey welfare. According to [1], stress has direct and negative consequences for turkey welfare. Turkey welfare is critical to turkey production and needs to be monitored closely for optimal production. Videos of turkeys can be used to assess turkey welfare by analyzing their behavior. Currently, researchers and others studying animal behavior need to spend a lot of time watching video recordings of turkeys in order to identify their behavior, which is both time-consuming and labor-intensive. There is an urgent need for an automated system to assist turkey researchers in conducting turkey welfare research. Such a system should be able to track turkeys accurately, detect changes in their behavior, and provide data visualization and analysis for user-friendly interaction.

To determine whether a turkey is sick, we need to identify different types of behavior. Behaviors of interest include walking, remaining stationary/sitting still, eating, drinking, preening, beak pecking, and feather pecking. Healthy and sick turkeys will engage in these behaviors with different frequencies and durations. For behaviors such as walking and being stationary, we can estimate each individual turkey's distance traveled within a fixed time frame, which requires knowledge of the turkey's location in each video frame from a turkey tracker. For behaviors like eating and drinking, we need to locate each turkey and determine whether it is eating or drinking based on its proximity to the feeder or drinker.
When computing the proximity to the feeder or drinker, both the turkey's location and its corresponding head location are required to make an accurate estimate of its behavior. Therefore, in addition to the turkey tracker, a turkey head tracker is also needed.

Existing studies that focus on monitoring animal behavior and health often require the use of wearable devices or are based solely on the researcher's observation, which can be intrusive to the turkeys and time-consuming for the humans. For example, Stevenson et al. [2] use accelerometers to monitor changes in turkey behavior. Results show that habituation to wearing accelerometers on turkey legs greatly impacts the validity and reliability of the data. In another study, turkey behavioral responses were recorded by human observers [3].

To better analyze turkey behavior and reduce labor, we propose an automated turkey tracking system with four major components: a turkey tracker, a turkey head tracker, a turkey behavior identification module, and a graphical user interface (GUI), as shown in Figure 1. The turkey tracker is used to track the location of each turkey in every frame, indicated by a bounding box around each turkey during the tracking process. The turkey head tracker is used to determine the location of each turkey's head in every frame, also indicated by a bounding box around each turkey's head. By combining information about turkey location and turkey head location, the behavior identification module detects different behaviors of interest and the duration and frequency of each behavior. Our system also includes a GUI to assist turkey researchers with visualization of the video analysis results. The GUI can generate different interactive plots that enable users to focus on individual turkey behavior during custom time intervals as well as aggregated behavior for the entire video.
Our main contributions include: 1) applying deep learning and video analytics to create a practical turkey behavior identification system, 2) creating an interactive web-based GUI to visualize automated video analysis results of turkey behavior, and 3) developing a turkey head tracker based on color histograms.

Figure 1: Overview of proposed system.

Related Work

In our earlier work [4], we implemented a turkey tracker based on the CSRDCF [5] tracker, which uses correlation filters to track objects. Recently, deep-learning based object trackers have shown promising results for many practical problems. In particular, DeepSort [6] uses a deep-learning based object detector combined with the Kalman filter [7] to carry out multi-object tracking, which is the task of tracking a set of objects in a sequence of frames [8]. The object detector generates detection results as bounding boxes, which are used to initialize the trackers. The Kalman filter predicts the next state of each tracked object. The tracker then associates newly detected bounding boxes with predicted locations from the Kalman filter using the Mahalanobis distance metric, which is effective for computing the distance between a point and a distribution. Another component of the DeepSort tracker is an appearance feature extractor, which computes appearance features for each detected bounding box and associates the detection with one of the existing tracks by computing the cosine distance between two distributions. The overall distance metric is a weighted sum of the Mahalanobis and cosine distances. In our application, we choose the DeepSort tracker because it is efficient and more suitable for video with many frames. For object detection, YOLOv3 [9] is a widely used object detector and has shown promising performance in many applications.
Other object detectors such as R-CNN [10] and Fast R-CNN [11] generate region proposals and assign a class label to each region. They can be slow because each region needs to be classified separately. YOLO looks at the entire image and detects objects within it using a single neural network; therefore, it is faster and more efficient.

Several object tracking systems have been designed to track animals. ToxTrac [12] and BioTracker [13] are two recent animal trackers with GUIs. ToxTrac [12] is an open-source object tracking software for animals such as fish and insects. It includes several different tracking algorithms, such as thresholding, background subtraction, and Kalman filtering. The GUI in ToxTrac can show individual tracks as well as some statistical information. BioTracker [13] is similar to ToxTrac in terms of software functionality. It includes tracking algorithms such as a background subtraction tracker and a Lucas-Kanade optical flow tracker. Its GUI is used to load video files and run the tracker; it can also be used to visualize tracking results and track statistics. These existing systems do not provide the components needed for our application, including tracking of turkeys and their heads, identification of individual turkey behavior, and built-in GUI interactivity.
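As a minimal sketch, the weighted association cost described above can be written as follows. The Mahalanobis and cosine distance matrices are assumed to be precomputed, and the weighting parameter `lam` and gating threshold are illustrative placeholders, not the values used in DeepSort or our system.

```python
import math

def association_cost(maha, cos, lam=0.5, gate=9.4877):
    # Weighted sum of motion (Mahalanobis) and appearance (cosine) distances;
    # pairs whose motion distance exceeds the gate are made inadmissible.
    # maha, cos: nested lists of shape (num_tracks, num_detections).
    rows = []
    for m_row, c_row in zip(maha, cos):
        rows.append([math.inf if m > gate else lam * m + (1 - lam) * c
                     for m, c in zip(m_row, c_row)])
    return rows

# Toy example: 2 existing tracks, 2 new detections.
maha = [[1.0, 20.0], [15.0, 2.0]]
cos = [[0.1, 0.9], [0.8, 0.2]]
cost = association_cost(maha, cos)
matches = [row.index(min(row)) for row in cost]  # greedy per-track match
```

In practice the assignment is solved jointly (e.g. with the Hungarian algorithm) rather than greedily per track, but the cost construction is the same.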
Methods
System Overview
Our method consists of a turkey tracker, a turkey head tracker, a turkey behavior identification module, and a GUI, as described in Figure 1. The input to our system is a video clip. For each frame, the DeepSort tracker predicts the location of each turkey by drawing a bounding box around it. The tracker also assigns a unique ID to each turkey so that an individual track can be established. The head tracker uses color information in different color spaces to find the heads of all tracked turkeys. Dense optical flow is also applied to pairs of consecutive frames to detect motion at a fine level. Based on information from the aforementioned components, the behavior identification module identifies the type of behavior associated with a specific turkey. Results from the trackers and behavior identification are imported into the GUI along with the video clips to generate different statistical plots. Figure 2 shows an example frame of the turkey video. As can be seen from the figure, turkeys look highly similar, and it can be challenging to identify the heads because of the background color. In the following sections, we describe each component of our system: 1) turkey tracker, 2) head tracker, 3) behavior identification module, and 4) GUI.

Figure 2: Experimental room with seven turkeys.
Video Analytics
Turkey Tracker
The DeepSort tracker [6] is used to predict the location of each turkey in every frame. The tracker relies on the YOLOv3 detector [9], which is fully trained on our custom turkey dataset. Details about the training data are presented in the dataset and evaluation metrics section. A turkey track contains the spatial location and bounding box information of each turkey in a frame and can be used to generate the temporal path of turkeys in the video, whereas a turkey tracker refers to the overall tracking algorithm. For a turkey track to be initialized and established, detection of that turkey has to be successful for a fixed number of frames. After a track is established, the tracker uses a Kalman filter to predict the next state of the track. The predicted state of an existing track from the Kalman filter is matched with the new detection results by computing the Mahalanobis distance. This association accounts for short-term motion of the object [6]. Appearance information of an object also needs to be considered because, after long-term occlusion, a turkey should be re-identified by the tracker with the same ID that was originally assigned to it. The distance between newly detected turkeys and previous turkey instances is computed as the cosine distance in the feature space. A CNN model pre-trained on a person re-identification dataset [14] is used to extract features. Even though this model was trained on data for re-identifying humans, it still performs well for extracting turkey features. By incorporating both motion and appearance information, the DeepSort tracker provides accurate predictions of where the turkeys are in each frame. Compared to the original implementation described in [6], we modified a few parameters to achieve the best results for our application. We changed the parameters inside the Kalman filter so that it only tracks the position variables, without considering the aspect ratios of the bounding boxes.
We also increased the number of successive detections needed to create a new track. These changes were made because turkeys move differently from humans, and shape deformations such as wing flapping and fast walking can cause false positive detections.

Turkey Head Tracker
To accurately find the heads of the turkeys, we focus on the color information of the heads. As shown in Figure 2, it can be challenging to distinguish the turkeys' heads from the background due to color similarity and their small size. To make the heads more distinguishable from the background, we apply contrast enhancement to the frames as a pre-processing step. By converting each frame from the RGB to the HSV color space and weighing the S channel more, we increase the contrast of every frame processed by our head tracker. Since the color of turkey heads varies across videos due to lighting conditions, we manually initialize the turkey head locations to achieve the best results. We manually initialize turkey heads by drawing bounding boxes around them at the beginning of the video, once the turkey tracks have been established. In subsequent frames, we look for the patches around all turkey heads that best match the ones initialized in the first frame. We also incorporate target updating, lost target detection, and lost target recovery throughout the entire video.

The manual initialization of each head is a 25 × 25 pixel patch. For each target patch, we compute its color histograms in two different color spaces: HSV and CIELAB. Histograms are computed for each individual color channel, for a total of six histograms from the two color spaces. During the initialization stage, the location of each head target is stored. In subsequent frames, the head tracker looks for the head in a tight search region constrained by the initialization. To account for motion of the head, we expand the search region of the head to 50 × 50 pixels. For each overlapping 25 × 25 region within the search area, we compute the six color histograms and the cosine distance between that patch and the original target. A weighted distance measure is used to compute the final distance metric, as shown in Equations 1 and 2, where w_HSV and w_Lab are fixed per-channel weight vectors. The sum of these two distances is used as our final distance measure. The overlap of different patches is set to 5 pixels.

dist_HSV = [dist_H dist_S dist_V] w_HSV^T (1)

dist_Lab = [dist_L dist_a dist_b] w_Lab^T (2)

Since turkeys are free to move around the room, there will be slight changes in the color of the turkey heads because of different lighting in different areas of the room. Therefore, we need to update the turkey-head targets regularly to make our head tracking more accurate. Turkey-head targets are updated every 3 seconds in our experiments. Since error can accumulate through the tracking process and target updating, there is a need to detect when a turkey head is lost. Here, error comes from the identified best matches containing many background pixels and drifting away from the actual head center. The cosine distance between the current head patch being tracked and the target patch is used to determine if a head is lost. Prior knowledge of turkey body shape is also used when determining whether a head is lost, since we know that a turkey's head cannot be too far from its body. The threshold for identifying a lost head is determined ad hoc based on the characteristics of the video.
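As a minimal sketch, the per-patch matching score described above can be computed as follows, assuming each patch has already been summarized by its six per-channel histograms. The weight vectors here are illustrative placeholders, not the values used in our experiments.

```python
import math

def cosine_distance(h1, h2):
    # 1 - cosine similarity between two histograms.
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return 1.0 - dot / (n1 * n2)

def patch_distance(patch_hists, target_hists,
                   w_hsv=(1/3, 1/3, 1/3), w_lab=(1/3, 1/3, 1/3)):
    # Weighted sum of per-channel cosine distances in HSV and CIELAB;
    # the final score is the sum of the two weighted measures.
    # patch_hists / target_hists: dicts mapping channel name -> histogram.
    d_hsv = sum(w * cosine_distance(patch_hists[c], target_hists[c])
                for w, c in zip(w_hsv, ("H", "S", "V")))
    d_lab = sum(w * cosine_distance(patch_hists[c], target_hists[c])
                for w, c in zip(w_lab, ("L", "a", "b")))
    return d_hsv + d_lab

# The best-matching candidate patch in the search region minimizes
# this distance. A patch identical to the target scores (near) zero:
target = {c: [1, 2, 3, 4] for c in ("H", "S", "V", "L", "a", "b")}
same = patch_distance(target, target)
```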
Behavior Identification

This section presents the behavior identification module, which relies on the estimated locations of the turkeys and their heads. We focus on detecting the following three behaviors: walking, eating, and drinking. Walking can be detected by calculating the distance traveled by each turkey within a fixed amount of time: if the distance traveled by a turkey exceeds a fixed threshold, we label this turkey as walking. To detect eating and drinking, we first mark the locations of the feeder and drinker. Using the turkey locations and turkey head locations, we measure each turkey's proximity to the feeder and drinker, and then make a decision based on whether the turkey is closer to the feeder or the drinker. Optical flow is used to detect more complicated behaviors such as preening and aggressive interactions. The output of the behavior identification module is the duration and frequency of each behavior. The minimum separation between two instances of the same behavior is three seconds. For example, if a turkey pauses walking for less than three seconds and then continues to walk, it is still considered to be walking during the pause.
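The distance- and proximity-based rules above can be sketched as follows. The landmark coordinates and thresholds are hypothetical values for illustration, not the ones used in the actual system.

```python
import math

# Illustrative landmark locations (pixels) and thresholds.
FEEDER = (100, 50)
DRINKER = (400, 60)
WALK_THRESH = 80   # pixels traveled within the time window
NEAR_THRESH = 60   # max head distance to a landmark

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def identify_behavior(track, head):
    # track: list of (x, y) body centroids over a fixed time window;
    # head: the current head location from the head tracker.
    traveled = sum(dist(a, b) for a, b in zip(track, track[1:]))
    if traveled > WALK_THRESH:
        return "walking"
    d_feed, d_drink = dist(head, FEEDER), dist(head, DRINKER)
    if min(d_feed, d_drink) < NEAR_THRESH:
        # Stationary near a landmark: decide eating vs. drinking
        # by which landmark the head is closer to.
        return "eating" if d_feed < d_drink else "drinking"
    return "stationary"

# A nearly motionless turkey whose head is next to the feeder:
label = identify_behavior([(110, 70), (112, 71), (111, 69)], (105, 55))
```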
Web-Based GUI
We designed a web-based GUI to allow easy and efficient visualization of and interaction with the video analytics output. We use four different formats to present the data: graphical interpretation, video interpretation, animation and analysis, and tabular data. The web-based GUI is written in Python and uses the Dash and Plotly [15] libraries.
Graphical Interpretation
The graphical interpretation tab allows the user to view the data with different types of graphs in two main categories: comparison graphs and statistical graphs. The main purpose of the comparison graphs is to allow the user to compare results for different turkeys. For example, the activity level of the turkeys can be estimated from the distance traveled by each turkey in the distance vs. time plot. The spatial location graph can be used to visualize where inside the room each turkey spends most of its time, to help understand its behavior. We include graphs for both individual and aggregate analysis, since changes in behavior can be reflected both individually and as a group. Statistical graphs are designed to provide basic statistical analysis of the turkeys' movement. Examples include a histogram of distance (PDF) plot, a cumulative distribution function (CDF) plot, a violin plot [16], and a 2D kernel density estimation plot. There are also many interactive features allowing the user to connect the graphical data with the associated video data. These include selecting which turkeys to view, comparing user-selected turkeys, clicking on a data point to view the location in the video where the data point occurred, zooming into certain sections of the graph, and selecting certain sections of the graph.
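The quantities behind the distance vs. time and CDF plots can be derived directly from the tracker output. A minimal sketch, assuming one body centroid per turkey per second:

```python
import math

def per_second_distances(centroids):
    # centroids: one (x, y) body centroid per second for one turkey.
    # Returns the distance traveled in each one-second step.
    return [math.hypot(b[0] - a[0], b[1] - a[1])
            for a, b in zip(centroids, centroids[1:])]

def empirical_cdf(values):
    # Returns (sorted values, cumulative fractions) for a CDF plot.
    xs = sorted(values)
    fracs = [(i + 1) / len(xs) for i in range(len(xs))]
    return xs, fracs

track = [(0, 0), (3, 4), (3, 4), (6, 8)]
dists = per_second_distances(track)
xs, fracs = empirical_cdf(dists)
```

These series would then be handed to Plotly for rendering in the GUI.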
Video Interpretation
In the Video Interpretation tab, the user can examine the video of the tracked turkeys in much more detail, with options such as changing the playback speed, updating the intervals to focus on, and seeking to certain parts of the video (first quarter, half, third quarter, and end).
Animation and Analysis
In the Animation and Analysis tab, the user can view the movements of a single turkey as a point object. This allows the user to see how the turkeys move every second. Figure 3 shows an example of the animation and analysis plot.

Figure 3: An example of the animation and analysis plot for one turkey. A play and stop button is provided to visualize the turkey movement, as each point represents the turkey position in a frame.
Tabular Data
In the Tabular Data tab, the user can view the data provided by the trackers. The tabular data is a 5-column list where each row represents one turkey in one second, and the five columns contain the turkey's ID, the x-coordinate of the centroid, the y-coordinate of the centroid, the width of the bounding box, and the height of the bounding box. Users can highlight certain boxes, or sort the data numerically in either ascending or descending order.
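A minimal sketch of this 5-column layout and its sorting behavior; the row values are made up for illustration:

```python
# One row per turkey per second: (id, x, y, width, height).
rows = [
    (2, 310, 220, 90, 110),
    (1, 120, 340, 85, 105),
    (1, 124, 338, 86, 104),
]

# Sort ascending by the x-coordinate of the centroid (column 1),
# as the GUI does when the user sorts on that column.
by_x = sorted(rows, key=lambda r: r[1])

# Or sort descending by turkey ID (column 0).
by_id_desc = sorted(rows, key=lambda r: r[0], reverse=True)
```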
Experiments and Results
Dataset and Evaluation Metrics
Our dataset contains videos of multiple turkeys in small pens. All turkeys are white commercial turkeys raised to similar weeks of age, so that they have similar colors and sizes. Videos are captured using commercial-grade cameras at HD resolution (1280 × 720).

To evaluate tracking performance, we adopt the CLEAR MOT metrics [17]: multiple object tracking accuracy (MOTA) and multiple object tracking precision (MOTP), defined in Equations 3 and 4. Here, m_t, fp_t, and mme_t are the number of misses, the number of false positives, and the number of mismatches at time t, respectively. g_t is the number of objects present at time t. d_it is the distance between a matched object in the ground truth and its corresponding hypothesis at time t. c_t is the number of matches at time t. Since MOTA subtracts three error ratios from the constant one, it can take on negative values. MOTP measures the distance error for matched objects and their hypotheses. MOTA is reported as a percentage and MOTP is reported in pixels per match. Note that MOTA increases with improved performance while MOTP decreases.

MOTA = 1 − Σ_t (m_t + fp_t + mme_t) / Σ_t g_t (3)

MOTP = Σ_{i,t} d_it / Σ_t c_t (4)

For evaluating our behavior identification module, we adopt commonly used activity recognition metrics from [18]. Specifically, our behavior detection is evaluated based on events, not frames. We evaluate the eating, drinking, and walking behaviors in terms of precision, recall, number of insertions, and number of deletions. Precision is the ratio between true positives and the total number of detections. Recall is the ratio between true positives and the total number of events in the ground truth. An insertion (I) is defined as a detected event with no corresponding event in the ground truth. A deletion (D) is defined as a failure to detect an event. We also compute the intersection over union (IOU) ratio between the detected behavior time intervals and the ground truth time intervals. IOU is computed as a ratio, with the intersection of the two time intervals in the numerator and the union of the two time intervals in the denominator.
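Equations 3 and 4 can be computed directly from per-frame counts. A minimal sketch, with made-up sample numbers:

```python
def mota(misses, false_pos, mismatches, gt_counts):
    # MOTA = 1 - (total errors over all frames) / (total ground-truth objects).
    errors = sum(m + fp + mme
                 for m, fp, mme in zip(misses, false_pos, mismatches))
    return 1.0 - errors / sum(gt_counts)

def motp(match_dists, match_counts):
    # MOTP = (total distance error of matched pairs) / (total matches),
    # i.e. pixels per match.
    return sum(match_dists) / sum(match_counts)

# Three frames, 7 ground-truth turkeys per frame, one error per frame.
m, fp, mme, g = [1, 0, 0], [0, 1, 0], [0, 0, 1], [7, 7, 7]
acc = mota(m, fp, mme, g)                    # 1 - 3/21
prec = motp([30.0, 28.0, 32.0], [6, 7, 6])   # total distance / total matches
```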
Results and Discussion

Turkey Tracker
As can be seen from Table 1, our DeepSort tracker tailored to tracking turkeys performs better than our previous turkey tracker [4] on all evaluation clips, for both short-term and long-term tracking. The clip ID column encodes the clip length and the number of turkeys inside the experimental room; for example, the ID of a 3-minute clip with five turkeys records both values. Our previous turkey tracker uses the CSRDCF tracker [5] and YOLOv3 [9] for detection. The increase in MOTA is dramatic for our newly proposed tracker. The pixel error for matched turkeys is less than 20 pixels for all evaluation clips.
Turkey Head Tracker
For the turkey head tracker, we also compare our method against our previous tracker [4]. Table 1 shows that our head tracker outperforms our previous tracker on all evaluation clips: MOTA values are greatly increased while MOTP values are reduced. Some MOTA values for [4] are negative because the number of misses and false positives is greater than the total number of ground truth objects. Negative MOTA values indicate that the tracker fails to track meaningful objects for a large fraction of the evaluation clip.

Table 1: Turkey tracking and turkey head tracking results (bold numbers are the best results for each clip and each task).

Clip ID   Method   Turkey Tracker        Head Tracker
                   MOTA ↑    MOTP ↓      MOTA ↑    MOTP ↓
          [4]      47.6%     21.22       -51.0%    24.92
          Ours
          [4]      10.7%     21.62       -50.4%    29.47
          Ours
          [4]      57.3%     23.45       14.9%     22.46
          Ours
          [4]      54.8%     23.56       -3.8%     19.49
          Ours
          [4]      39.4%     23.34       -47.6%    29.87
          Ours
          [4]      12.3%     20.47       -15.3%    27.26
          Ours
          [4]      40.3%     21.44       26.6%     23.89
          Ours
          [4]      5.6%      28.50       16.7%     21.24
          Ours

Table 2: Turkey behavior identification results.

           Precision   Recall   I   D   IOU
Walking    60%         60%      4   2   0.6015
Eating     73%         85%      3   1   0.5319
Drinking   20%         100%     4   0   0.6923
Behavior identification module
Given an ethogram that defines the different turkey behaviors, we labeled ground truth behavior data for walking, eating, and drinking, in terms of time intervals, for one of our evaluation clips with 7 turkeys. The ground truth contains data for 4 of the 7 turkeys inside the room. However, our system fails to detect one of these turkeys consistently and only outputs its location information for a small fraction of the frames. We omit this turkey from the behavior evaluation since we cannot reliably estimate its locations over a consecutive sequence of frames. Our goal is to successfully detect as many actions as possible and not miss any action; therefore, recall is more important than precision for our application. Among the 3 remaining turkeys, the ground truth contains a total of 10 occurrences of walking, 13 occurrences of eating, and 1 occurrence of drinking. As shown in Table 2, our behavior identification module achieves over 60% recall for each behavior and over 53% IOU in time intervals for correct detections.
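The IOU between a detected behavior interval and a ground-truth interval reduces to simple interval arithmetic. A minimal sketch, with made-up example times:

```python
def interval_iou(a, b):
    # a, b: (start, end) times in seconds for two behavior intervals.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# Detected eating from t=2s to t=10s, ground truth from t=4s to t=12s:
# overlap is 6s, union is 10s, so IOU = 0.6.
iou = interval_iou((2, 10), (4, 12))
```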
GUI
A few example plots are shown here to demonstrate the functions of the GUI. Figures 6, 7, and 5 are examples of the comparison plots, and Figure 4 is an example of the statistical plots. Figure 6 is a plot of the turkey trajectories, color coded by time interval and turkey ID. Figure 7 shows that the user can use our GUI to select a custom time window and focus on an individual turkey to visualize results. Figure 5 is an example of turkey locations plotted as a 2D density plot; a darker color means that the turkey spends more time at that location inside the room. Figure 4 shows the violin plot for the distance traveled by each turkey. The violin plot is similar to a box plot, but it also represents the rotated kernel density plot (showing where data points concentrate). These plots enable researchers to engage in both individual and aggregate behavior analysis and provide abundant interactivity with the tracking results.

Figure 4: Violin plot for Turkey 1.

Figure 5: 2D density plot of turkey locations.

Figure 6: Turkey trajectory plots color coded by time.

Figure 7: Individual turkey spatial distance with an interactive time window.
Conclusion and Future Work
In this paper, we introduce a video analysis system that tracks turkeys and turkey heads, identifies turkey behavior, and provides an interactive GUI to help visualize the results. We demonstrate good tracking performance on several evaluation video clips, both for our DeepSort-based [6] turkey tracker and for our color-histogram based turkey head tracker. Our turkey tracker and head tracker perform significantly better than our previous method [4]. Results from our behavior identification module also show that we can accurately detect each behavior and capture the duration of the activity. Our interactive GUI provides different forms of data visualization to interpret the tracking results. For future work, we will evaluate our system on more video data to test its robustness under different conditions. Also, according to [19], injurious pecking, such as head pecking and feather pecking, is a major behavioral issue for commercial turkeys. We plan to expand the behavior identification module to incorporate a more comprehensive collection of behaviors, especially aggressive interactions between turkeys, so that abnormal turkey behavior can be identified and treatment can be applied in a timely manner.
References

[1] G. Huff, W. Huff, N. Rath, A. Donoghue, N. Anthony, and K. Nestor, "Differential effects of sex and genetics on behavior and stress response of turkeys," Poultry Science, vol. 86, pp. 1294–1303, 2007.
[2] R. Stevenson, H. A. Dalton, and M. Erasmus, "Validity of micro-data loggers to determine walking activity of turkeys and effects on turkey gait," Frontiers in Veterinary Science, vol. 5, p. 319, 2019.
[3] M. Erasmus and J. Swanson, "Temperamental turkeys: Reliability of behavioural responses to four tests of fear," Applied Animal Behaviour Science, vol. 157, pp. 100–108, 2014.
[4] S. Ju, M. A. Erasmus, A. R. Reibman, and F. Zhu, "Video tracking to monitor turkey welfare," Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 2020.
[5] A. Lukežič, T. Vojir, L. Čehovin Zajc, J. Matas, and M. Kristan, "Discriminative correlation filter with channel and spatial reliability," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[6] N. Wojke, A. Bewley, and D. Paulus, "Simple online and realtime tracking with a deep association metric," Proceedings of the IEEE International Conference on Image Processing, pp. 3645–3649, 2017.
[7] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960.
[8] M. Fiaz, A. Mahmood, and S. K. Jung, "Tracking noisy targets: A review of recent object tracking approaches," arXiv preprint arXiv:1802.03098, 2018.
[9] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[11] R. Girshick, "Fast R-CNN," Proceedings of the IEEE International Conference on Computer Vision, 2015.
[12] A. Rodriguez, H. Zhang, J. Klaminder, T. Brodin, P. L. Andersson, and M. Andersson, "ToxTrac: A fast and robust software for tracking organisms," Methods in Ecology and Evolution, vol. 9, no. 3, pp. 460–464, 2018.
[13] H. J. Mönck, A. Jörg, T. von Falkenhausen, J. Tanke, B. Wild, D. Dormagen, J. Piotrowski, C. Winklmayr, D. Bierbach, and T. Landgraf, "BioTracker: An open-source computer vision framework for visual animal tracking," arXiv preprint arXiv:1803.07985, 2018.
[14] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian, "MARS: A video benchmark for large-scale person re-identification," Proceedings of the European Conference on Computer Vision, 2016.
[15] S. Hossain, "Visualization of bioinformatics data with Dash Bio," Proceedings of the Python in Science Conference, pp. 126–133, 2019.
[16] J. L. Hintze and R. D. Nelson, "Violin plots: A box plot-density trace synergism," The American Statistician, vol. 52, no. 2, pp. 181–184, 1998.
[17] K. Bernardin and R. Stiefelhagen, "Evaluating multiple object tracking performance: The CLEAR MOT metrics," EURASIP Journal on Image and Video Processing, vol. 2008, pp. 1–10, 2008.
[18] J. A. Ward, P. Lukowicz, and H. W. Gellersen, "Performance metrics for activity recognition," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 1, pp. 1–23, 2011.
[19] M. A. Erasmus, "Welfare issues in turkey production," in Advances in Poultry Welfare, ser. Woodhead Publishing Series in Food Science, Technology and Nutrition, J. A. Mench, Ed. Woodhead Publishing, 2018, pp. 263–291.