Drive Safe: Cognitive-Behavioral Mining for Intelligent Transportation Cyber-Physical System
Md. Shirajum Munir, Sarder Fakhrul Abedin, Ki Tae Kim, Do Hyeon Kim, Md. Golam Rabiul Alam, Choong Seon Hong
Abstract—This paper presents a cognitive-behavioral driver mood repairment platform in intelligent transportation cyber-physical systems (IT-CPS) for road safety. In particular, we propose a driving safety platform for distracted drivers, namely drive safe, in IT-CPS. The proposed platform recognizes the distracting activities of drivers as well as their emotions for mood repair. Further, we develop a prototype of the proposed drive safe platform to establish a proof of concept (PoC) for road safety in IT-CPS. In the developed driving safety platform, we employ five AI and statistical models for cognitive-behavioral mining of the vehicle driver to ensure safe driving: a capsule network (CN), maximum likelihood (ML), a convolutional neural network (CNN), the Apriori algorithm, and a Bayesian network (BN) are deployed for driver activity recognition, environmental feature extraction, mood recognition, sequential pattern mining, and content recommendation for affective mood repairment of the driver, respectively. Besides, we develop a communication module to interact with the systems in IT-CPS asynchronously. Thus, the developed drive safe PoC can guide vehicle drivers when they are distracted from driving due to cognitive-behavioral factors. Finally, we have performed a qualitative evaluation to measure the usability and effectiveness of the developed drive safe platform. We observe that the P-value is . (i.e., < . ) in the ANOVA test. Moreover, the confidence interval analysis also shows significant gains in the prevalence value, which is around . for a confidence level. The aforementioned statistical results indicate high reliability in terms of the driver's safety and mental state.

Index Terms—Driver emotion recognition, Driver safety and transportation system, Mood repairment, Intelligent driving safety platform, 6G cyber-physical system.
I. INTRODUCTION

According to the global status report on road safety by the World Health Organization (WHO), around 1.35 million people die each year due to road traffic crashes [1]. Therefore, the 2030 Agenda for Sustainable Development has set an unprecedented goal of reducing the global number of injuries from road traffic crashes by half by 2020. In fact, among several key factors for road traffic crashes, the issue
Md. Shirajum Munir, Ki Tae Kim, Do Hyeon Kim, and Choong Seon Hong are with the Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Sarder Fakhrul Abedin is with the Department of Information Systems and Technology, Mid Sweden University, Sundsvall 851 70, Sweden, and also with the Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea (e-mail: [email protected]). Md. Golam Rabiul Alam is with the Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh, and also with the Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea (e-mail: [email protected]). Corresponding author: Choong Seon Hong (e-mail: [email protected]).

of a distracted driving behavior [2] is more prevalent with the growing concern over driver activity and psychological state. For example, the WHO global status report on road safety indicates that drivers using a mobile phone while driving are approximately four times more likely to be involved in a crash than drivers not using one. Meanwhile, the use of hands-free phones and texting while driving also impedes ensuring drivers' safety [3]. Besides, the psychological states of drivers significantly affect the incidence of road collisions [4]. As a result, the traditional transportation system has gone through rapid development, which has led to extensive research on intelligent transportation systems [5]. Cyber-physical systems (CPS) are physical and engineered systems that enable the integration of the cyber world of computing and communications with the physical world [6].
As a result, the concept of intelligent transportation cyber-physical systems (IT-CPS) [7] has emerged from traditional CPS, where the objective is to bridge the gap between the traffic information system and the physical transportation system in terms of intelligent and efficient monitoring, coordination, and control [8]. More specifically, the perceived physical transportation environment facilitates a deep understanding of urban traffic for improving current traffic conditions through newly emerged computing, communication, and control technologies.

To provide a safe driving environment in IT-CPS, it is imperative to build a bridge between cognitive-behavioral mining of the vehicle driver and the computing, communication, and control technologies of next-generation cyber-physical systems. Therefore, this study aims to identify the psychological factors so that the drive safe platform can guide vehicle drivers when they are distracted from driving due to cognitive-behavioral factors. In this regard, we propose an autonomous interaction between a vehicle driver's mental health and affective mood repairment during driving. To the best of our knowledge, research has been conducted separately on vehicle drivers' mental health recognition and distracted behavior detection. In fact, affective mood repairment for vehicle drivers is entirely unexplored.

The main contribution of this paper is a novel drive safe platform that focuses on cognitive-behavioral mining of the vehicle driver during driving, promising to ensure driver safety in intelligent transportation cyber-physical systems. Our key contributions include:
• First, we propose the drive safe platform, which is comprised of two established domains, cognitive engineering
Fig. 1: Example of driver activities during the drive [5].

and communication network, for the intelligent transportation cyber-physical system.
• Second, we propose several artificial intelligence and statistical methodologies for driver activity recognition, mood mining, cognitive-behavioral mining, and affective mood repairment of the vehicle driver. To repair the affective mood, the system autonomously recommends content (i.e., audio) based on the driver's current mental health. Therefore, the proposed models fulfill the vision of computing, communication, and control technologies by utilizing multi-access edge computing, on-device computing, and autonomous decision making through the communication network.
• Third, we develop a prototype of the drive safe platform for the intelligent transportation cyber-physical system that narrows the gap between the vision of the research community and industry. The developed prototype establishes the proof of concept (PoC) of the proposed drive safe platform.
• Finally, a qualitative evaluation has been performed to evaluate the usability and effectiveness of the developed drive safe platform. The statistical analysis has established the effectiveness of the developed platform; in particular, an average outcome of more than . is achieved for an upper confidence level (CL) of the confidence interval analysis from the user studies. To this end, we examine the technical challenges and driving technologies for the drive safe platform toward road safety. We have found that the proposed cognitive-behavioral road safety platform ensures the usability of cutting-edge technologies for future intelligent transportation cyber-physical systems.

The rest of the paper is organized as follows. Section II presents the related works of the drive safe platform in IT-CPS. The proposed drive safe platform in IT-CPS is described in Section III. We provide the testbed implementation of cognitive-behavioral mining for the drive safe platform in IT-CPS in Section IV.
Discussion and key findings are given in Section V. Finally, conclusions are drawn in Section VI.

Fig. 2: Geneva emotion wheel [9] (axes: arousal and valence).

II. RELATED WORK
The intelligent transportation system (ITS) [10]–[17] is a prominent outcome of next-generation communication systems. To ensure driver safety [18]–[21] in the intelligent transportation system, driver activity [22]–[26] and mental health [27]–[32] play a crucial role. In this section, we discuss the background of the intelligent transportation system, some related works, and challenges, which are grouped into three categories: (i) intelligent transportation system and driver safety, (ii) driver activity recognition, and (iii) driver mood recognition.
A. Intelligent Transportation System and Driver Safety
The intelligent transportation system is an integrated technology aimed at delivering innovative services relating to various modes of transportation and traffic management. Such technology allows consumers an intelligent and reliable transport network in a safer and more organized manner. Moreover, a bundle of emerging technologies [17] converges toward the goal of ITS, in particular edge computing [11], [33], edge analytics [10], and reliable communication that includes 5G [12], [13] and beyond [14]. In fact, emerging applications such as autonomous and connected vehicle control [19], infotainment [16], and autonomous road map management for emergency vehicles [15] have established the success of ITS. However, driver safety in ITS is essential to ensure road safety as well as the vehicle driver's psychological health.

Recently, some challenges of driver safety have been studied in [18]–[21]. In [18], the authors developed an autonomous transportation application that estimates the mental fatigue of a driver using electroencephalogram (EEG) measurements to ensure a safe drive. This work predicted a collision probability to avoid road accidents by fusing the driver's mental state with the car parameters. The authors in [19] proposed a personalized vehicle trajectory prediction mechanism that finds the leading vehicle trajectory by analyzing driving behaviors (i.e., aggressive, moderate, and conservative) to assure road safety for connected and autonomous vehicles. In [20], the authors studied a service-based enhanced collision avoidance mechanism in the cellular network and proposed an edge-based framework for road safety. The authors in [21] conducted a questionnaire-based survey of drivers and mined the behavior of those vehicle drivers based on standard statistical tests for developing a proactive road safety strategy.
These statistical tests include driver behavior at traffic signals, dilemma zone analysis, and driver comprehension, where the results showed that income level, education level, driving experience, age, gender, and driving frequency are the key factors of driver behavior. However, all of these existing works [18]–[21] only moderately enhance road safety. In contrast, our proposed drive safe platform provides a complete cyber-physical system for the intelligent transportation system that not only ensures road safety but also repairs the affective mood of the vehicle driver during long-term driving.
B. Driver Activity Recognition
In recent years, distracted driving has become one of the major causes of road accidents. Therefore, it is imperative to detect driver activity during the drive to achieve the goal of road safety in IT-CPS. However, some individual research works [22]–[26] have determined driver activity without any connection to IT-CPS. In [22], the authors proposed an end-to-end deep learning (DL) model to detect driver distraction by analyzing the electrodermal activity (EDA) signal from the nasal and palm areas along with heart rate and breathing rate. The authors classified drivers into two classes, 1) with distractions or 2) without distractions, and monitored driver distraction in real time. In [23], the authors developed an in-vehicle driver distraction detection system based on support vector machines (SVMs) that used eye movements and driving logs. The authors in [24] proposed a mixed convolutional neural network (CNN) and histogram of oriented gradients (HOG) feature-based method to detect driver distraction in real time. In [25], the authors developed a CNN-based neural network model that can classify seven types of driver activities into two classes. To increase driver comfort and safety during the drive, the authors in [26] classified driving styles into three categories (i.e., low-risk, moderate-risk, and high-risk) from operational images. Further, the authors analyzed the performance of driving style classification through a deep convolutional neural network, long short-term memory (LSTM), and pre-trained LSTM. However, these works [22]–[26] do not investigate the problem of distracted driving in IT-CPS, nor do they account for feedback or safety instructions to the drivers. In this work, we provide real-time driver distraction detection and safety instructions to the driver via on-vehicle computing with IT-CPS. Further, we fuse the driver's activity with cognitive behavior for the betterment of the driver's mental health during the drive.
C. Driver Mood Recognition
To ensure road safety in ITS, driver stress and fatigue [27] recognition is one of the fundamental challenges. However, very few studies [28]–[32] focus on analyzing physiological data for driver mood recognition. In [28], the authors first proposed a stress detection mechanism for the driver from physiological sensor data. Additionally, they presented a method to collect electrocardiogram (ECG), electromyogram (EMG), skin conductance, and respiration sensor data from the driver during the drive. The authors in [29] evaluated cognitive distraction in a simulation environment from the driver's eye tracking and tested it in a driving car with a km/h speed. In [30], the authors considered the ECG physiological signal to characterize the driver's mental state during distracted driving. Further, the authors in [31] captured the EDA, skin temperature, and ECG signals from the driver to differentiate the mood into four basic categories using a CNN. Additionally, the authors proposed Dempster-Shafer-based evidence theory to obtain a more robust emotional state for the driver. To monitor driver attention during the drive, the authors in [32] analyzed the electroencephalogram (EEG) signal and identified the impact of the key brain regions. These studies [28]–[32] focused on detecting stress and are restricted to a limited scope that does not answer the question of "what is next?". Therefore, in this paper, we provide a cognitive-behavioral mining platform for IT-CPS that not only recognizes the driver's mood but also repairs the affective mood toward road safety.

III. DRIVE SAFE PLATFORM FOR INTELLIGENT TRANSPORTATION CYBER-PHYSICAL SYSTEM
The goal of the drive safe platform for intelligent transportation cyber-physical systems is to design and develop a driver safety application infrastructure that considers the driver's mental state while driving to prevent deadly road accidents and injuries. In particular, we develop a personalized smart transportation safety platform using the user's biosensor, on-vehicle, and environmental sensor data as a user lifelog. Thus, we employ cognitive-behavioral mining of the driver's mood during driving, where mental health assessment and content recommendation based on the vehicle environment and the driver's emotion are considered for mood repairment of the vehicle driver.
A. Drive Safe Platform
As shown in Fig. 3, the drive safe platform consists of several components, including various sensors, techniques, and methodologies. In the drive safe platform, we consider three kinds of sensors: biosensors (i.e., physiological and non-invasive), on-vehicle camera sensors, and environmental sensors. The collected biosensor data are fed into the emotion recognition engine for the vehicle driver's mood mining. The emotion recognition module uses a CNN [34] for mood classification. Further, the in-vehicle camera and environmental sensor data are fed into a driver activity classification and environment monitoring engine. The real-time camera sensor data are used to classify the driver activity (e.g., safe driving, texting, talking on the phone, drinking, hair and makeup, talking to passengers) during driving via a driver activity classification module. Meanwhile, driver distraction [5], [25] is classified
Fig. 3: Proposed drive safe platform for the intelligent transportation cyber-physical system.

into distracted or not distracted by the driver's distraction detection engine. Subsequently, the environment monitoring engine monitors the inside environment of the vehicle using environmental sensor data. The driver's emotional, activity, and environmental contexts are fused in the context fusion module and sent to the pattern mining engine.

The pattern mining engine performs association rule mining that finds the vehicle driver's lifestyle patterns. In this work, we design association rule mining using the Apriori [35] method to extract cognitive patterns from observed sensor data. Support and confidence thresholds are used to find the significant patterns. The pattern mining module sends significant lifestyle patterns to the emotion-aware recommender. In particular, the pattern parser parses the context of the driver's lifestyle and sends it to the emotion-aware recommender system for analyzing the emotional pattern by utilizing previous historical observations. The emotion-aware recommender system uses a Bayesian [36] recommendation algorithm to recommend contents for mood repairment. The fusion recommender fuses the driver's emotional and physical well-being recommendations and sends them to the knowledge base. Thus, a context-aware recommender of the smart on-board vehicle application delivers the transportation safety notification and content to the vehicle driver. The smart on-board application keeps the driver's record of lifestyle patterns, establishes the corresponding recommendation to the vehicle driver, and provides an interface to interact with the smart transportation system. In particular, the vehicle dashboard provides feedback to the driver for mood repairment and a safe drive.
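As a rough illustration of the support/confidence pruning described above, the following sketch mines simple one-to-one association rules from hypothetical fused lifelog "transactions" (the item names, thresholds, and data are invented for illustration; the platform's actual Apriori implementation may differ):

```python
from itertools import combinations

# Hypothetical lifelog transactions: each set holds one time window's
# fused context items (activity, emotion, environment) for the driver.
transactions = [
    {"texting", "anxious", "high_temp"},
    {"texting", "anxious", "low_light"},
    {"safe_driving", "calm", "low_light"},
    {"texting", "anxious", "high_temp"},
]

def apriori_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Mine 1- and 2-item association rules with support/confidence pruning."""
    n = len(transactions)
    items = {i for t in transactions for i in t}

    def support(itemset):
        # Fraction of transactions containing all items of the itemset.
        return sum(itemset <= t for t in transactions) / n

    # Apriori step: frequent single items first, then candidate pairs
    # are built only from frequent items.
    frequent = {i for i in items if support({i}) >= min_support}
    rules = []
    for a, b in combinations(sorted(frequent), 2):
        pair_sup = support({a, b})
        if pair_sup >= min_support:
            for lhs, rhs in ((a, b), (b, a)):
                conf = pair_sup / support({lhs})
                if conf >= min_confidence:
                    rules.append((lhs, rhs, pair_sup, conf))
    return rules

rules = apriori_rules(transactions)
# Yields rules such as ("texting", "anxious", 0.75, 1.0): whenever the driver
# was texting in this toy lifelog, the driver was also anxious.
```

Rules surviving both thresholds (e.g., "texting implies anxious") are the significant lifestyle patterns handed to the emotion-aware recommender.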
B. Communication and Computation Infrastructure for the Drive Safe Platform
The communication and computation infrastructure (CCI) for the drive safe platform is considered as a multi-access [37]–[40] technology for 5G and beyond, as shown in Fig. 4. In this communication model, we consider a four-layer communication and computation infrastructure. Such an infrastructure includes the on-vehicle IoT network, edge computing, intelligent system, and cloud services layers. In particular, this CCI can provide an intelligent transportation cyber-physical system by enabling on-board (i.e., on-device) computing, edge computing, and cloud computing.
1) On-vehicle IoT Network: In this layer, we consider three vertical sensor networks with on-device computational capabilities. In particular, the on-vehicle IoT network [41] encompasses physiological, environmental, and inertial sensor networks.

a) Body Area Network: To capture the physiological observations of a vehicle driver, we consider four types of biosensors: electrodermal activity (EDA), electrocardiogram (ECG), electromyography (EMG), and electroencephalography (EEG). These sensors capture the physiological [42] data of the vehicle driver through a sink node. Besides, the sink node acts as a physiological data aggregator. Thus, the communication between the sensors and the sink node (i.e.,
Fig. 4: Communication and computation infrastructure for the drive safe platform in the intelligent transportation cyber-physical system.

local and on-board) is Bluetooth IEEE Std 802.15.1 [43]. After aggregating the sensor data, the sink node communicates with the edge server over the WiFi IEEE 802.11n-2009 standard [44].

b) Environmental Sensors Network:
The environmental sensor network consists of three types of environmental sensors. We characterize the driver compartment's environment by capturing temperature, humidity, and ambient light sensor data. Thus, we consider a gateway node to collect these sensor data, which communicates with the edge server via a wireless LAN (IEEE WiFi 802.11n-2009 standard [44]). This gateway node can preprocess and also extract features through on-gateway computation.

c) Camera Sensor Network:
Real-time video is captured by the camera sensor to detect the current activity of the vehicle driver while driving. This camera is physically deployed in the driver compartment and controlled by the on-board vehicle device. This on-board device can be any smart device attached to the car dashboard. Smart devices can communicate with the multi-access edge server through the wireless LAN IEEE WiFi 802.11n-2009 standard [44].
2) Multi-Access Edge Computing: The multi-access edge computing (MEC) module is deployed in the second layer (bottom-up) of the drive safe platform in the IT-CPS. This layer can communicate through multiple communication protocols for fast computation and low-latency access [45]. In the proposed communication and computation infrastructure, the MEC can communicate through Bluetooth, wireless LAN, and wired LAN for inter-layer communication. The communication protocols are not limited to the above; the MEC can communicate through any wireless technology [46], [47] such as LTE or 5G. Further, the edge computing server provides the computation of several artificial intelligence [48] methods that detect the vehicle driver's behavior and distraction to enable the prevention mechanism. Thus, multi-access edge computing ensures heterogeneity in terms of both communication and computation, while the several intelligent systems are decoupled from each other.
3) Intelligent Cyber-Physical Transportation Systems: To assure driver safety through the intelligent cyber-physical transportation systems, it is imperative to accommodate the computation of driver emotion recognition, current activity recognition, and context-aware recommendation on the edge. Therefore, on top of the multi-access edge computing, we consider an intelligent system layer that consists of emotion recognition, activity recognition, and context fusion engines. Each decision-making engine performs domain-specific AI tasks to ensure driver safety when distractions occur during the vehicle drive. These engines are deployed on the multi-access edge computing platform, and the edge servers communicate with the cloud, in particular over a wired backhaul [49] connection between the multi-access layer and the cloud.
4) Cloud Services: To facilitate a safe drive in the considered intelligent transportation system, we design a cloud service layer. In this work, this layer encompasses the content recommendation service and the activity recommendation service.
Fig. 5: Use case of the cognitive behavior mining and recommendation system toward drive safe in the intelligent transportation cyber-physical system.

These services ensure the vehicle driver's safety as well as the driver's mood repairment for a safe drive. In particular, the integration of the cloud-based recommendation service ensures the robustness of the drive safe platform.
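To make the content recommendation service concrete, the sketch below scores contents by a smoothed Bayesian estimate of "probability that this content improved the mood given the driver's current mood", built from hypothetical listening history. This is an illustrative stand-in, not the paper's actual Bayesian recommender.

```python
from collections import Counter

# Hypothetical history: (mood when played, content id, did mood improve?).
history = [
    ("angry", "calm_song", True), ("angry", "calm_song", True),
    ("angry", "rock_song", False), ("sad", "upbeat_song", True),
    ("sad", "calm_song", True), ("angry", "upbeat_song", True),
]

def recommend(mood, history, alpha=1.0):
    """Rank contents by a Laplace-smoothed estimate of
    P(mood improved | current mood, content) and return the best one."""
    played = Counter((m, c) for m, c, _ in history)
    improved = Counter((m, c) for m, c, ok in history if ok)
    contents = {c for _, c, _ in history}
    scores = {
        c: (improved[(mood, c)] + alpha) / (played[(mood, c)] + 2 * alpha)
        for c in contents
    }
    return max(scores, key=scores.get), scores

best, scores = recommend("angry", history)
# best == "calm_song": it improved the angry driver's mood in 2 of 2 plays.
```

Smoothing keeps never-played (mood, content) pairs at a neutral prior of 0.5 instead of zero, so new contents still get a chance to be recommended.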
C. Use Case of the Drive Safe Platform
The cognitive behavior of a vehicle driver relies on mental health as well as physical behavior. Further, environmental factors also play a crucial role in mental health. Therefore, in this section, we illustrate the use of the drive safe platform, which can reduce the chance of a road accident by intelligently analyzing the vehicle driver's cognitive behavior.

Consider a scenario where a vehicle driver is driving a car with full concentration and without any mental stress. We assume that this vehicle driver is driving in a safe manner. However, during the drive, the driver may also check a cell phone, talk to another passenger, drink, operate the radio, or perform other activities that induce a distracted driving pattern (shown in Fig. 1). Therefore, we can classify the driver activity into ten categories, as illustrated in Table I. Besides, the driver activities also depend on the driver's current emotional state [51].

TABLE I: Driver activity class [50]
Description | Class
Drive with concentration | Safe driving
Texting - right | Distracted driving
Talking on the phone - right | Distracted driving
Texting - left | Distracted driving
Talking on the phone - left | Distracted driving
Operating the radio | Distracted driving
Drinking | Distracted driving
Reaching behind | Distracted driving
Hair and makeup | Distracted driving
Talking to passenger | Distracted driving

In particular, according to Russell's emotion circumplex [52], human emotion relies on two vital factors: arousal and valence. Arousal discretizes the physiological and psychological state of the driver's brain that can stimulate the sense organs to a point of perception. A higher value of arousal describes high anxiety and excitement, which is the intensity of the driver's physiological changes. Meanwhile, valence characterizes the driver's emotion in terms of attractiveness and averseness. Notably, a positive value indicates goodness of emotion, while a negative value describes badness of emotion. Thus, Fig. 2 illustrates the relationship between arousal and valence and the human mood. In particular, the values of arousal and valence represent the driver's mood and also rely on indoor compartment environmental factors such as light intensity, temperature, and humidity. Therefore, to ensure drive safe in an IT-CPS, driver mood repairment is essential. To accomplish this goal, we design a use case that assures driver safety during driving. We illustrate the use case of the cognitive behavior mining and recommendation system of the drive safe platform in Fig. 5. In this use case, we consider a server and clients, where each vehicle acts as a client and the edge/cloud serves as the server in the intelligent transportation cyber-physical system. Particularly, a pre-trained driver activity recognition model [5], maximum likelihood (ML) [53] models for on-compartment environment observation, and a Bayesian [36] content recommender for mood repairment are deployed on the on-vehicle device.
Meanwhile, the edge server classifies [34] the driver's emotion from the physiological sensor observations of the driver. The on-vehicle device collects those observations through the physiological sensors placed on the driver's body. Further, sequential pattern mining of the driver's current emotion and environmental factors is performed by the server. This sequential pattern mining engine receives the current activity, environmental context, and the driver's emotion as inputs. The server then sends the sequential pattern mining result to the on-vehicle device for further processing. Thus, the on-vehicle device executes a Bayesian content recommender to select the content (i.e., audio songs) for driver mood repairment. This content helps the vehicle driver with mood repairment, reducing the driver's anxiety toward a safe drive. Subsequently, the on-vehicle device also delivers safety messages based on real-time driver activity recognition via the on-board vehicle display. The mental health of the particular vehicle driver improves based on the recommended mood repairment contents. In the intelligent transportation cyber-physical system, each vehicle follows the same procedure in real time. To this end, the Bayesian content recommender recommends personalized contents (i.e., audios) to each vehicle driver based on personal choice, which ensures personalized behavior mining for the driver.

For the cognitive behavior mining and recommendation system of the drive safe platform in IT-CPS, we have deployed five AI-based models. A capsule network (CN) [54] is considered for the vehicle driver's activity recognition and classification. Meanwhile, a convolutional neural network recognizes human emotion [34]. Further, a statistical maximum likelihood (ML) [53] model determines the statistical environment factors for the on-compartment environment (i.e., light intensity, temperature, and humidity) of the vehicle.
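For a single Gaussian-modeled environmental feature, the maximum-likelihood estimates have the familiar closed form: the sample mean and the 1/n-normalized variance. The sketch below applies them to hypothetical cabin temperature readings; the data and the outlier-flagging use are illustrative, not the paper's exact ML model.

```python
import math

def gaussian_mle(samples):
    """Closed-form ML estimates (mean, variance) for a Gaussian model
    of one environmental feature, e.g. cabin temperature."""
    n = len(samples)
    mu = sum(samples) / n
    # The MLE of variance divides by n, not n - 1 (it is slightly biased).
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def log_likelihood(x, mu, var):
    """Per-sample Gaussian log-likelihood; low values flag atypical readings."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

temps = [24.1, 24.5, 23.8, 24.3, 29.9]   # hypothetical cabin readings (deg C)
mu, var = gaussian_mle(temps)            # mu == 25.32
```

A reading far from the fitted mean, such as the 29.9 above, gets a much lower log-likelihood than a typical one, which is exactly the signal the sequential pattern miner can consume as an "unusual environment" factor.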
To conduct behavior mining of the vehicle driver, we have deployed an Apriori-based [35] sequential behavioral pattern mining model to capture the behavior of a driver by analyzing the driver's mood, activity, and environmental factors. Finally, a personalized Bayesian content recommendation [36] engine is developed to repair a vehicle driver's mood. This ensures a safe driving environment in the IT-CPS. A detailed description of the testbed implementation is given in the next section.

IV. TESTBED IMPLEMENTATION FOR DRIVE SAFE PLATFORM
The drive safe testbed consists of several components, including various sensors with a sink node, on-vehicle computational processing units, and an edge server. In this work, we have deployed four kinds of physiological sensors, three types of environmental sensors, and a camera sensor to capture the physiological, environmental, and activity data, respectively. We consider a Raspberry Pi 3 Model B as the environmental sensor gateway (Raspbian OS), a Core i3-7100 processor with a speed of 3.9 GHz along with 8 GB of RAM as the on-vehicle device (Windows 10 OS), and a Core i7 processor with a speed of 2.6 GHz along with 32 GB of RAM as the edge server (Ubuntu 16.04 LTS OS). We illustrate the testbed environment in Fig. 6.
A. Sensor Data Collection and Preprocessing
In this work, we collect observational data from eight sensors to determine the driver's mood. Meanwhile, we recommend content to the vehicle driver for mood repairment, accomplishing the goal of the drive safe platform in IT-CPS toward driver safety. We have developed this data collection system by combining three IoT networks. A body area network (BAN) is created to collect physiological data of the vehicle driver. Further, we deployed environmental sensors along with a gateway design. Finally, we capture real-time camera sensor data using an on-vehicle camera. A brief discussion of these sensor data collection and preprocessing mechanisms follows.
1) Physiological Sensors:
A BAN is formed to collect the physiological sensor data through a sink node. The sink node communicates via Bluetooth, and the four sensors are attached to it with wired connections. To ensure high-accuracy measurements of the bio-information [56], [57] from the driver, we consider electromyogram (EMG), electrocardiogram (ECG), electrodermal activity (EDA), and electroencephalography (EEG) sensors. The physiological sensors and sensor hub are depicted in Fig. 7. In particular, a BITalino (r)evolution Board Kit BLE [55] physiological sensor module is used for the BAN.

a) EMG:
The trapezius muscle's electrical activity is measured via the surface EMG [58], which exhibits prevalent characteristics of mental stress and irritation. As possible features of the EMG signal, the mean amplitude, mean and median frequency, average EMG gaps, and percentile of EMG gaps are extracted. At first, the signals received from the EMG are preprocessed for denoising and baseline removal, and the applied sampling rate is Hz. A bandpass filtering approach is applied to cut off the low frequencies, which removes baseline wander and also discards high-frequency noises. The upper trapezius muscle's EMG is useful for measuring the anxious and stressful mode of the subject. Stress appears when the sympathetic nervous system is activated. One of its consequences is elevated muscle tone,
which sometimes leads to shivering. Capturing this muscle tone elevation can then be a predictor of the level of mental stress. During a stressful situation, studies [28], [58] show significantly higher amplitudes of the EMG signals than at rest, and fewer gaps during stress. In particular, mean and median frequencies were significantly lower during stress than at rest. EMG features correlated with the subjectively indicated stress levels of the vehicle driver. A previous state-of-the-art study [58] indicated that induced stress resulted in changes in the trapezius muscle's EMG signals. These EMG changes included an increase in amplitude and a decrease in the number of recorded gaps. Both are indications of elevated muscle activity caused by the stress tasks [59]. By measuring the stress level, we can infer the subject's negative mood, which is an indication of negative valence and negative arousal.

Fig. 6: Testbed for drive safe platform in intelligent transportation cyber-physical system.

Fig. 7: BITalino (r)evolution Board Kit BLE physiological sensors module [55].

b) ECG:
To improve the accuracy of emotion classification, we collect ECG [60] sensor observations from the driver while driving and listening to audio stimuli. We analyze the received ECG signals to extract discriminative features. To do this, we analyze the R-R intervals, QRS amplitude and duration, QT interval, QTc interval, R/S ratio, and R-peak from the driver's ECG signals. The mean, variance, and first and second derivatives are also computed with respect to time. As a result, we can utilize those patterns as functional features that strongly affect valence and arousal. We collect ECG signals through the sensor hub and then preprocess them for denoising and baseline removal. In particular, we apply a sampling rate of Hz. The bandpass filtering approach is used to cut off the low frequencies, which eliminates baseline wander and discards high-frequency noises. Eventually, we perform smoothing by taking a moving-average sample window.

c) EDA: One of the most critical indicators of emotional arousal is electrodermal activity. The EDA derives from the autonomic stimulation of skin sweat glands; emotional pressure causes sweating on the hands and feet. While we are emotionally aroused, the EDA data reveal distinctive patterns, which can be quantified statistically. The obtained EDA bio-signals for emotion analysis incorporate specific noise types. Using bandpass filtering, we filter the received signals from the EDA biosensor, which isolates the noises. An electrical potential is applied at two points of skin contact to measure the skin conductance level (SCL), and the current flow between those points is measured. The skin conductance response (SCR) is calculated from two consecutive zero-crossing behaviors. The EDA is a particularly useful index for identifying emotional and cognitive states because parasympathetic behavior does not contaminate it.

d) EEG:
To capture the brain activity for a particular event, we consider the electroencephalography [61] signal from the vehicle driver. We can find the voltage fluctuations that occur due to ionic currents within the neurons of the driver's brain. The EEG sensor discretizes an electrical potential through electrodes placed on the scalp of the driver. We capture this signal with a sampling rate of Hz.
Fig. 8: Testbed setup for environmental sensor with Raspberry Pi 3B module.
Fig. 9: Environmental sensor data collection.
2) Environmental Sensor:
In the first sensor module, we described how to collect the physiological sensor data of the vehicle driver during driving. To capture the on-compartment environment data, we consider an ambient light sensor and a temperature and humidity sensor. These sensors are attached to a Raspberry Pi 3B platform that acts as a gateway node of the considered cyber-physical system. In Figs. 8 and 9, we illustrate the environmental sensor data acquisition system. In this system, the sensory observations are preprocessed and features are extracted from the collected environmental sensory data. In particular, the environmental sensor (e.g., ambient light, temperature, and humidity) data are recorded, and a maximum likelihood probability model is executed for preprocessing. We capture data points for each sensor in one observational period.

a) Ambient Light Sensor: The ambient light sensor is used to detect the brightness of the driver's compartment and thus characterizes the ambient environment. Features are extracted for each time window; such features include Mean (average of the data in the time window), Min (minimum value in the time window), Max (maximum value in the time window), Std (standard deviation of the data in the time window), End-Start diff (the difference between the last value and the first value in the time window), and Max-Min diff (the difference between the maximum and minimum values in the time window).

b) Temperature Sensor:
The temperature sensor captures the current temperature of the driver's compartment environment. In particular, the extracted features of the sensor are time, sample number, temperature, and voltage.

c) Humidity Sensor: The humidity sensor indicates the likelihood of precipitation, dew, or fog in the user environment. The extracted features of the sensor are RH (relative humidity), PPM (parts per million), D/F PT (dew/frost point), AB (absolute humidity), and timestamp.
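The per-window statistics listed above for the ambient light sensor can be computed in a few lines; the lux readings below are hypothetical:

```python
import statistics

def window_features(window):
    """Per-time-window features named in the text for the ambient light sensor."""
    return {
        "mean": statistics.fmean(window),          # average of the window
        "min": min(window),                        # minimum value
        "max": max(window),                        # maximum value
        "std": statistics.pstdev(window),          # standard deviation
        "end_start_diff": window[-1] - window[0],  # last minus first value
        "max_min_diff": max(window) - min(window), # range of the window
    }

lux = [310, 305, 298, 287, 280, 275]   # hypothetical ambient-light readings
print(window_features(lux))
```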
3) Camera Sensor:
To capture the vehicle driver's activity image, we consider a webcam with × resolution. We capture image frames at an interval of ms. These images are stored directly on the on-vehicle device, and we preprocess them to × × . The preprocessed driver image provides a three-channel color signal (red, green, and blue).

B. AI-based Module Development
The drive safe platform consists of several AI-based modules. In particular, these modules include a capsule network, a convolutional neural network, Apriori, and a Bayesian network (BN) for driver activity recognition, driver mood mining, sequential cognitive pattern mining, and driver mood repairment content recommendation, respectively. The combined effort of these AI modules fulfills the goal of the intelligent transportation cyber-physical system toward cognitive-behavioral mining of the vehicle driver. In particular, a personalized safe driving cyber-physical system is imposed that reduces the risk of road accidents due to distracted driving.
1) Driver Activity Recognition:
For driver activity recognition, we consider a pre-trained driver activity AI model. That model is trained with a capsule network [54] using the state-of-the-art State Farm distracted driver detection dataset [50]. This dataset consists of ten different driver activities during driving, as shown in Table I, and two major classes: safe driving and distracted driving. Additionally, we have used all ten classes (see Table I) to fuse the driver activity with the mental state for the drive safe platform in IT-CPS.

TABLE II: Capsule network parameters [5]
Parameter              Primary convolution   Primary capsule
Filters/Capsules       256                   32
Kernel
Strides
Dimension of capsule

To train the pre-trained driver activity recognition model, we stratify the dataset [50] so that each class is distributed across the training, test, and validation sets in approximately equal proportion. Thus, we consider training, validation, and testing samples of , , and , respectively, during the pre-trained CN model training process. The driver's physical activity during driving relies on sharp changes of the body parts, especially the angle of movement of the face, wrist, neck, etc. Thus, the CN [54] can handle these body-part features caused by sharp movements better than the CNN [25], whose pooling mechanism discards such spatial information. The driver activity recognition pre-trained model is depicted in Fig. 10, and the major configuration parameters of the CN model are shown in Table II.

The ten activity classes are: c0: safe driving; c1: texting, right; c2: talking on the phone, right; c3: texting, left; c4: talking on the phone, left; c5: operating the radio; c6: drinking; c7: reaching behind; c8: hair and makeup; c9: talking to passenger.

Fig. 10: Capsule network design for driver activity recognition. Input images are reshaped to 128 x 128 x 3 (RGB) and pass through the primary convolution (ReLU activation), the primary capsule (squash activation), and the digit capsule obtained via the dynamic routing algorithm, yielding the driver activity classes.

Fig. 11: Receiver operating characteristic (ROC) curve for driver activity recognition.

TABLE III: Activity classification report for real-time driver data
Name                 Precision   Recall   F1-score   Support
Safe driving         0.65        0.85     0.73       1300
Distracted driving   0.94        0.84     0.89       3800
Micro avg            0.79        0.84     0.81       5100
Weighted avg         0.87        0.84     0.85       5100
We have exported and deployed the pre-trained driver activity recognition model in the on-vehicle dashboard client application. Therefore, we provide driver activity recognition feedback to the driver at a ms interval via the developed on-vehicle dashboard client application. Meanwhile, we send the activity ID for further fusion with the driver's mood. Receiver operating characteristic (ROC) and precision-recall curves for the driver activity recognition are illustrated in Figs. 11 and 12, respectively. In addition, the real-time and training driver activity classification reports are given in Tables III and IV, respectively. The accuracy of the real-time driver activity recognition is around due to the different camera positions and vehicle jerking.

Fig. 12: Precision-recall curve for driver activity recognition.

TABLE IV: Activity classification report using test set

Name                 Precision   Recall   F1-score   Support
Safe driving         0.94        1.00     0.97       88
Distracted driving   1.00        0.99     1.00       912
Micro avg            0.97        1.00     0.98       1000
Weighted avg         0.99        0.99     0.99       1000
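For context, the capsule layers in the model above use the "squash" nonlinearity from the capsule network literature [54], which shrinks short vectors toward zero and long vectors toward unit length while preserving their orientation. A minimal sketch:

```python
import numpy as np

def squash(s, eps=1e-8):
    """Capsule 'squash' nonlinearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Short vectors map near zero; long vectors map near unit length."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = norm_sq / (1.0 + norm_sq)
    return scale * s / np.sqrt(norm_sq + eps)

long_vec = squash(np.array([10.0, 0.0]))    # length approaches 1
short_vec = squash(np.array([0.1, 0.0]))    # length approaches 0
print(np.linalg.norm(long_vec), np.linalg.norm(short_vec))
```

The length of a capsule's output vector is then interpreted as the probability that the corresponding activity class (c0-c9) is present.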
2) Driver Mood Mining:
The goal of vehicle driver mood mining is to find a cognitive relationship based on the physiological sensory observations, in particular by extracting features of the physiological observations captured by the EMG, ECG, EDA, and EEG sensors of the vehicle driver's BAN. These observations are classified into to for the arousal and valence values [62], [63]. The relationship among the driver mood, arousal, and valence scores is shown in Table V.
Fig. 13: CNN-based emotion wheel of a personalized vehicle driver.

In this work, we deploy a learning model that can effectively find the arousal and valence from the driver's biosensor data. Therefore, we use our CNN-based [34] mood mining framework, which can effectively determine arousal and valence for the driver. The considered CNN-based mood mining model is trained on the DEAP dataset [64]. In this work, we feed an additional EEG sensor that can capture the brain activity of the vehicle driver during driving. Thus, in an observational period, we feed data points for each of the EMG, ECG, EDA, and EEG sensors and examine them on the edge server. The output of this CNN model consists of a pair of integer values: valence and arousal. For cognitive-behavioral mining of the driver during driving, we fuse the driver's physical activity with the mental state by applying the sequential pattern mining mechanism. This can infer a relationship between the mental and physical activity of the vehicle driver toward a safe driving event. Fig. 13 depicts the output of the mood mining for a particular driver.

Fig. 14: Apriori transaction example.
3) Sequential Pattern Mining:
The driver activity, the on-compartment vehicle environmental likelihood, and the driver mood (i.e., valence and arousal) are fused with the audio content to generate the cognitive-behavioral pattern of the vehicle driver's status. In particular, this cognitive-behavioral sequential pattern is mapped to the audio content candidates for driver mood repairment during the drive. Thus, we have developed association rule mining following the concept of the Apriori [35] model. Through association rule mining, the user behavioral pattern is generated so that the system can recommend a candidate set of contents based on the contextual information. In the association rule mining, each of the patterns is considered a transaction (as seen in Fig. 14), and based on the confidence and support values, the fused data is sent to the emotion-aware recommender. In this pattern mining method, we consider a minimum support count of . so that we can capture the significant patterns between the physical activity and the driver's physiological behavior. In particular, we determine a sequence of tuples ⟨activity, arousal, valence, content⟩ that provides a set of rules based on the minimum support count and minimum confidence value. In other words, we generate association rules that represent a mapping ⟨activity, arousal, valence⟩ → content between the cognitive behavior of the driver and the candidate content. We employ these sequential pattern rules in a Bayesian network that can recommend appropriate content toward driver mood repairment.

Fig. 15: Bayesian inference model for content recommendation.
4) Content Recommendation for Mood Repairment:
One of the key objectives of our proposed drive safe platform is the recommendation of content for the mood repairment of the vehicle driver. To meet this goal, we design a Bayesian inference based recommendation system that recommends content to the vehicle driver. A Bayesian network [36] is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a graph. The hidden affective states and the contents can be represented as a Bayesian network, as shown in Fig. 15. A set {Bored, Melancholic, Longing, ..., Excited} of N states can be mathematically represented as S = {s_1, s_2, s_3, ..., s_N}. The contents {Content_1, Content_2, ..., Content_N} can be mathematically represented as C = {c_1, c_2, ..., c_N}. Bayesian inference [36] is a method of statistical inference in which Bayes' theorem is used to update the probability of a hypothesis as more evidence or information becomes available. Using Bayesian theory, we can determine the probability of future affective states based on the stimuli content. We consider the current affective state

TABLE V: Driver mood category [9]
Valence | Arousal | Mood (valence and arousal scores omitted). Mood categories: Dejected, Doubtful, Wavering, Anxious, Gloomy, Depressed, Ashamed, Uncomfortable, Taken back, Disappointed, Startled, Distrustful, Bitter, Insulted, Frustrated, Loathing, Disgusted, Distressed, Contemptuous, Defiant, Hateful, Obstructive, Angry, Droopy, Tired, Sleepy, Bored, Melancholic, Conscientious, Embarrassed, Languid, Longing, Feel guilt, Worried, Confident, Apathetic, Neutral, Impressed, Suspicious, Impatient, Passionate, Indignant, Jealous, Feeling superior, Annoyed, Afraid, Conceited, Hostile, Tensed, Lusting, Servient, Compassionate, Conductive, Polite, Peaceful, Calm, Contemplative, Friendly, Serene, Attentive, Hopeful, Glad, Interested, Feel well, Pleased, Amused, Determined, Happy, Convinced, Enthusiastic, Ambitious, Excited, Courageous, Aroused/Astonished, Adventurous, Self confident.
P(s_i | c_j, s_{i-1}), where s_i is the future affective state, c_j represents the current audio content (listening), and s_{i-1} denotes the current affective state. However, our goal is to find a sequence of contents for a smooth mood swing from negative to positive affect. We determine such a sequence through maximum likelihood estimation with a Bayesian estimator such as the Viterbi algorithm [65]. To operate the proposed intelligent drive safe platform, we have developed an on-vehicle dashboard as a client application that can effectively and autonomously provide feedback to the vehicle driver while driving.

C. On-vehicle Dashboard
The role of the developed on-vehicle dashboard is to guide the vehicle driver away from distracted driving. In particular, this dashboard application can operate without interaction from the vehicle driver during driving, autonomously playing the recommended audio content for the driver's affective state. Further, it notifies the driver based on on-compartment activity toward safe driving. We demonstrate the developed on-vehicle dashboard client application of the drive safe platform in Figs. 16a, 16b, 16c, and 16d. This dashboard also visualizes the mood swing state, which constitutes the driver's lifelog. To develop the on-vehicle dashboard UX, we have followed the recognition rather than recall
Nielsen's heuristic [66] UX design principle, in which drivers can easily detect their current mood when they watch the dashboard. They use the recognition usability heuristic since all the cues are visible on the dashboard. For example, when the driver's mood changes, the UI automatically detects the current status. After that, it can change the audio based on the driver's mood. From the UI, the driver can also learn whether the driving is safe or whether there is a warning during driving by watching the dashboard. The vehicle driver can easily understand the situation through the clear interaction of the developed on-vehicle dashboard and does not need to rethink it. Apart from that, the developed on-vehicle dashboard for the driver's affective mood repairment UX satisfies the other Nielsen usability heuristics [66], and the relationship between those usability heuristics and the developed prototype is described in Table VI. Additionally, the output of this on-vehicle dashboard ensures a personalized recommender system, so the drive safe platform maintains the driver's personal data privacy.

The drive safe platform's algorithmic procedure is illustrated in Fig. 17. The entire procedure not only meets the requirements of pervasive computing with the IT-CPS but also ensures road safety during the drive. A brief discussion and the key findings of the drive safe platform are described in the following section.

V. EXPERIMENTAL DISCUSSION AND KEY FINDINGS
The goal of this work is to provide a proof of concept (PoC) for the cognitive-behavioral mining of the vehicle driver in an intelligent transportation cyber-physical system. Therefore, we have employed five AI and statistical-based models to infer the cognitive-behavioral mining of a vehicle driver that can ensure safe driving during the drive. Thus, in this paper, the performance analysis of each individual model is out of scope. Hence, we have performed a qualitative evaluation of the developed drive safe platform. In this section, we first discuss statistical analysis, an ANOVA test, and confidence

Fig. 16: Evaluation environment of vehicle dashboard. (a) Safe driving with all features. (b) Distracted driving with all features. (c) Distracted driving with the previous mood. (d) Audio changes with mood changed.

TABLE VI: Nielsen's usability heuristics met by the developed driver dashboard UX
Visibility of system status (Yes): The on-vehicle dashboard interacts with users through always-visible visual feedback. Users can recognize whether the system is working via precise and reasonable feedback.
Match between system and the real world (Yes): The on-vehicle dashboard prototype interacts with the vehicle driver and internal system functions, such as activity recognition, the content recommender, and so on.
Flexibility and efficiency of use (Yes): The developed prototype is flexible, which helps drivers use it efficiently as expert users would, and it is also friendly to new system users.
Consistency and standards (Yes): Usage of the on-vehicle dashboard is predictable and learnable for the vehicle driver due to the interactive flow of functionality.
Error prevention (Yes): Provides proper feedback messages to users.
Aesthetic and minimalist design (Yes): The lifelog and the visualization of the driver's mood fluctuation keep the content communicative to the driver.
Help users recognize, diagnose, and recover from errors (No): Will be added in future work.
Help and documentation (No): Will be added in future work.
User control and freedom (No): Will be added in future work.

interval analysis based on a user study for the developed drive safe platform. Then we provide some of the key findings and technical challenges in terms of prototype development for the drive safe platform in the intelligent transportation cyber-physical system.
A. Qualitative Evaluation of Drive Safe

1) Experimentation Design: In this section, we discuss the detailed procedure for evaluating the developed drive safe prototype of the IT-CPS.

a) Population: In this experiment, we consider five participants for testing the usability and effectiveness of the developed drive safe prototype of the IT-CPS. We have not

TABLE VII: Summary of the technologies and methodologies for the proposed cognitive-behavioral mining in intelligent transportation cyber-physical system
Purpose: Driver activity recognition | Method: Capsule network | Technology: Deep neural network-based AI model
Purpose: Driver mood mining | Method: Convolutional neural network | Technology: Deep neural network-based AI model
Purpose: Environmental feature analysis for fusion | Method: Maximum likelihood probability model | Technology: Statistical model
Purpose: Sequential behavioral pattern mining | Method: Apriori algorithm | Technology: Intelligent pattern mining model
Purpose: Content recommendation for affective mood repairment | Method: Bayesian network | Technology: Statistical AI model
Purpose: On-vehicle dashboard application | Method: Recognition rather than recall | Technology: Heuristics for UI design
Purpose: Model computation | Method: N/A | Technology: Multi-access edge computing and on-device
Purpose: Communication | Method: Multi-thread asynchronous | Technology: Bluetooth, WiFi, Wired (LAN)
Fig. 17: Drive safe platform algorithmic procedure in intelligent transportation cyber-physical system.

divided the participants into groups, since all of the users of the developed system are considered novices. The developed drive safe prototype of the IT-CPS is considered a new field; therefore, most of the users are not experts.

b) Hypotheses:
In this work, the primary hypothesis is that the developed drive safe prototype of the IT-CPS can operate without any physical interaction (i.e., without input from the user) by the vehicle driver during driving. Meanwhile, the system should autonomously play the recommended audio content for the driver's affective state and notify the driver based on on-compartment activity to ensure safe driving. The secondary hypotheses concern the visualization of the current activity, the mood swing state, and other sensory data (i.e., physiological sensors, camera sensors, and environmental sensors), as well as the driver's lifelog.

c) Study Conditions:
In this experiment, our goal is to conduct a one-way within-subject design to evaluate the developed UX prototype, in which we design six individual questions based on Nielsen's usability heuristics [66] to perform the usability test of the developed drive safe prototype of the IT-CPS. Therefore, we consider an independent variable based on the score value of each question, and all six questions are scored by the participants (see Appendix A). The score levels are as follows: 1: Not good, 2: Somewhat good, 3: Good, 4: Satisfactory, 5: Very satisfactory. Furthermore, in order to evaluate the statistical confidence level based on the user feedback, we design another eight questions (see Appendix B) with a binary level, 0: No, 1: Yes.
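For reference, the two statistics reported in the next subsection (a one-way ANOVA F statistic over the per-user scores, and a Wilson score interval, one of the five interval methods used) can be computed as follows. The scores and success counts below are hypothetical, not the study's data:

```python
import math

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over per-user score lists."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical 1-5 usability scores for five participants.
scores = [[5, 4, 5, 5, 4, 5], [4, 4, 5, 4, 4, 4], [5, 5, 4, 5, 5, 5],
          [3, 4, 4, 3, 4, 4], [5, 5, 5, 4, 5, 5]]
print(one_way_anova_f(scores))
print(wilson_interval(36, 40))   # hypothetical 36 "Yes" out of 40 answers
```

The P-value then follows from the F statistic and its two degrees of freedom via the F distribution (e.g., `scipy.stats.f.sf`).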
2) Experimental Results:
We analyzed the users' evaluation data on a Python platform. We performed several statistical analyses using the users' feedback data to evaluate the developed drive safe prototype of IT-CPS. In particular, we performed statistical analysis (i.e., mean, variance, and standard deviation), an ANOVA test, and a confidence interval analysis using five well-known methods. An error bar from the standard deviation and mean is represented in Fig. 18. Meanwhile, we describe the mean, variance, and standard deviation of the usability test scores in Table VIII. Further, we conducted a one-way analysis of variance (ANOVA) based on the user feedback scores of the usability test of the drive safe prototype. Table IX illustrates the outcome of that test. In this ANOVA test, we found a P-value of . , which is less than . . Therefore, our usability test achieves statistical significance for the conducted experiment. Given the F-value (i.e., . ), we can say the variance between the means of the populations is significantly different. In other words, the considered hypotheses have the statistical significance to evaluate the developed drive safe prototype of IT-CPS with high reliability.

TABLE VIII: Mean, variance, and standard deviation of the usability test scores

Users/Overall   Mean   Variance   Standard Deviation
A               .83    0.17       0.
B               .      .          .
C               .67    0.27       0.
D               .67    0.27       0.
E               .66    3.87       1.
Overall         .26    1.51       1.

Finally, we conducted a confidence interval analysis for the effectiveness evaluation. To do this, we performed five kinds of analysis: Asymptotic (Wald), Binomial (Clopper-Pearson), Wilson score interval, Agresti-Coull (adjusted Wald) interval, and Jeffreys interval [67]. We have

Fig. 18: Standard deviation analysis for the users' feedback using error bar analysis.

TABLE IX: ANOVA test result
Name                       Value
Sum of Squares Residual    .
Sum of Squares Model       .
Mean Square Residual       .
Mean Square Explained      .
F-value                    .
P-value                    .

TABLE X: Confidence interval analysis
Method                   Prevalence   Lower CL   Upper CL
Normal approx.           .            .          .
Clopper-Pearson exact    .            .          .
Wilson                   .            .          .
Jeffreys                 .            .          .
Agresti-Coull            .            .          .

found positive out of samples from the users. We present our confidence analysis results in Table X. We can see that for an upper confidence level (CL), the average outcome is more than , which indicates the significance of the developed drive safe prototype of IT-CPS.

In summary, the considered hypotheses have shown the significance of the test design, which can efficiently evaluate the usability and effectiveness of the developed drive safe prototype of IT-CPS. In particular, the P-value of . (i.e., < . ) in the ANOVA test ensures that the hypotheses and the developed system are completely aligned with the user studies. Further, in the confidence interval analysis, we found a prevalence value of around . for a confidence level, which assures the effectiveness of the proposed drive safe prototype of IT-CPS.

B. Key Findings
A summary of the key technical challenges is as follows:

• First, the data acquisition rate varies between the considered sensors, and the data transfer rate also differs among the three data acquisition IoT networks. As a result, synchronizing the sensor data collection is one of the major challenges. In fact, we cannot control the data acquisition rate of the sensors due to the individual characteristics of each sensor. To overcome this challenge, we apply a multi-thread asynchronous mechanism that can collect and store sensor data independently.
Remark. The sensors we used to collect the physiological observations from the driver could be replaced by a commercial wearable product, which would reduce the physical burden on the driver of wearing sensors.

• Second, in the proposed drive safe platform, driver activity recognition is one of the crucial requirements for detecting the distraction of vehicle drivers. Therefore, we have deployed a pre-trained driver activity recognition model on the on-vehicle device. The accuracy of that driver distraction detection is around due to the different camera positions in each vehicle. Hence, we can improve the model accuracy by deploying an on-vehicle fine-tuning model for each vehicle. In that case, the pre-trained driver activity model acts as a generalized model, and each on-vehicle fine-tuning model becomes a personalized model for each vehicle driver. However, the main drawback of such a mechanism is the computational cost for each on-vehicle device, where the energy consumption of the on-vehicle device is proportional to the computational cost.
Remark. There is a tradeoff between accuracy and the computational cost of the on-vehicle device for vehicle driver activity recognition.

• Third, the evaluation of driver mood mining from the physiological observations is quite challenging due to the vast sensory observational data generated every two minutes of an observational time period. A two-minute duration is the ideal time for human emotion to swing [62]-[64]. Therefore, we send the preprocessed data of each physiological observational period to the edge server via wireless communications, along with the environmental and activity logs. The driver mood mining is validated with the state-of-the-art DEAP dataset [64] on the edge server to find the arousal and valence of the driver. In fact, another role of the edge server is to compute the sequential cognitive behavior mining that maps to a list of recommended audio contents for the driver's mood repairment. The edge server then sends only a list of sequential behavioral pattern mining data to the particular on-vehicle device for personalized decision making.
Remark. The proposed cognitive-behavioral mining establishes multi-access edge computing for the intelligent transportation cyber-physical system and ensures personalized decisions for each vehicle driver.

• Last but not least, the developed on-vehicle device application can determine a personalized affective state for the vehicle driver by executing a Bayesian network over the lifelog of the individual driver. Further, the on-board vehicle display provides three key functionalities: i) a safety message to the driver at each ms interval, ii) mood swing statistics along with the lifelog, and iii) the driver's mood repairment contents based on cognitive-behavioral mining, delivered autonomously without any interaction by the vehicle driver during driving.

Remark. The proposed on-vehicle device application assures recognition rather than recall [68] based recommendation toward safe driving for the vehicle driver in an intelligent transportation cyber-physical system.
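The multi-thread asynchronous collection mechanism from the first challenge can be sketched with worker threads feeding a shared queue, so sensors with different acquisition rates never block one another. The sensor names and rates below are illustrative:

```python
import queue
import threading
import time

def sensor_worker(name, period_s, samples, sink):
    """Each sensor thread samples at its own rate and pushes timestamped
    readings to a shared queue, decoupling the differing acquisition rates."""
    for i in range(samples):
        sink.put((name, i, time.monotonic()))
        time.sleep(period_s)

sink = queue.Queue()
threads = [
    # Hypothetical rates: a fast physiological channel and a slow one.
    threading.Thread(target=sensor_worker, args=("ecg", 0.001, 5, sink)),
    threading.Thread(target=sensor_worker, args=("light", 0.01, 2, sink)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

readings = [sink.get() for _ in range(sink.qsize())]
print(len(readings), "readings collected")   # 7 readings collected
```

In the testbed, the same idea decouples the BAN, the environmental gateway, and the camera feed, with the stored timestamps used later to align the streams per observational period.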
A summary of the considered technologies and methodologies is illustrated in Table VII. The proposed drive safe platform adopts cutting-edge technologies and methodologies for the cognitive-behavioral mining of the vehicle driver, fulfilling the goal of an intelligent transportation cyber-physical system.

VI. CONCLUSION
In this work, we have introduced a new drive safe platform that can meet the requirements of intelligent transportation cyber-physical systems towards road safety. In particular, we have proposed a cognitive-behavioral mining based driver safety platform with a working prototype that converges with the goals of academic and industry research. Further, the driver safety platform is supported by various cutting-edge technologies, such as cognitive-behavioral mining, multi-access edge computing, artificial intelligence, and heterogeneous communications. The user study has also shown the significance of the developed drive safe prototype for IT-CPS. In the future, we will focus on improving the computational and communication latency of the vehicle driver's cognitive-behavioral mining by designing a more robust mechanism.

APPENDIX A
OBSERVATIONAL QUESTIONNAIRES FOR THE USABILITY TEST OF DRIVE SAFE PROTOTYPE
TABLE XI: Observational questionnaires for the usability test of drive safe prototype
Q.ID  Observation (rated 1 2 3 4 5)
H1  Can the on-vehicle dashboard interact with users through always-visible visual feedback?
H2  Can the system interact with the vehicle driver and internal system functions, such as activity recognition, the content recommender, and so on?
H3  Does the system automatically provide recommendations?
H4  Is the system behavior predictable?
H5  Does the system provide audio and visual feedback to the driver?
H6  Does the system visualize the driver's mood swings?
1: Not good, 2: Somewhat good, 3: Good, 4: Satisfactory, 5: Very satisfactory

APPENDIX B
QUESTIONNAIRES FOR THE OVERALL SYSTEM EVALUATION OF DRIVE SAFE PROTOTYPE
TABLE XII: Questionnaires for the overall system evaluation of drive safe prototype
Q.ID  Question (answered 0 or 1)
Q1  Would you recommend the affective mood repairment on-vehicle dashboard to a friend?
Q2  Do you think the affective mood repairment on-vehicle dashboard is helpful?
Q3  Do you think the system does not distract you during driving?
Q4  Is the mood-swing state chart helpful for mental safety measurement?
Q5  Is autonomous audio play a suitable feature during driving?
Q6  Does the system visualize the driver's mood swings?
Q7  Is the system useful for all kinds of vehicles?
Q8  Is the system easy to use?
0: No, 1: Yes

REFERENCES

[1] W. H. Organization et al., "Global status report on road safety 2018: Summary," World Health Organization, Tech. Rep., 2018.
[2] G. Fountas, S. S. Pantangi, S. S. Ahmed, U. Eker, and P. C. Anastasopoulos, "Factors affecting perceived and observed aggressive driving behavior: An empirical analysis of driver fatigue, and distracted driving," 2019.
[3] N. Qiao and T. M. Bell, "State all-driver distracted driving laws and high school students' texting while driving behavior," Traffic Injury Prevention, vol. 17, no. 1, pp. 5–8, 2016.
[4] S. S. Alavi, M. R. Mohammadi, H. Souri, S. M. Kalhori, F. Jannatifard, and G. Sepahbodi, "Personality, driving behavior and mental disorders factors as predictors of road traffic accidents based on logistic regression," Iranian Journal of Medical Sciences, vol. 42, no. 1, p. 24, 2017.
[5] M. S. Munir, S. F. Abedin, K. Kim, and C. S. Hong, "Towards edge intelligence: Real-time driver safety in smart transportation system," Korea Computer Congress (KCC), pp. 1336–1338, 2019.
[6] R. Rajkumar, I. Lee, L. Sha, and J. Stankovic, "Cyber-physical systems: The next computing revolution," in Design Automation Conference. IEEE, 2010, pp. 731–736.
[7] W. Chang, S. Burton, C.-W. Lin, Q. Zhu, L. Gauerhof, and J. McDermid, "Intelligent and connected cyber-physical systems: A perspective from connected autonomous vehicles," in Intelligent Internet of Things. Springer, 2020, pp. 357–392.
[8] Y. Feng, X. An, and S. Li, "Application of context-aware in intelligent transportation cps," in . IEEE, 2017, pp. 7577–7581.
[9] K. R. Scherer, "What are emotions? And how can they be measured?" Social Science Information, vol. 44, no. 4, pp. 695–729, 2005.
[10] A. Ferdowsi, U. Challita, and W. Saad, "Deep learning for reliable mobile edge analytics in intelligent transportation systems: An overview," IEEE Vehicular Technology Magazine, vol. 14, no. 1, pp. 62–70, 2019.
[11] L. U. Khan, I. Yaqoob, N. H. Tran, S. A. Kazmi, T. N. Dang, and C. S. Hong, "Edge computing enabled smart cities: A comprehensive survey," IEEE Internet of Things Journal, 2020.
[12] X. Cheng, C. Chen, W. Zhang, and Y. Yang, "5G-enabled cooperative intelligent vehicular (5GenCIV) framework: When Benz meets Marconi," IEEE Intelligent Systems, vol. 32, no. 3, pp. 53–59, 2017.
[13] S. Chen, J. Hu, Y. Shi, Y. Peng, J. Fang, R. Zhao, and L. Zhao, "Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G," IEEE Communications Standards Magazine, vol. 1, no. 2, pp. 70–76, 2017.
[14] R. C. Moioli, P. H. Nardelli, M. T. Barros, W. Saad, A. Hekmatmanesh, P. Gória, A. S. de Sena, M. Dzaferagic, H. Siljak, W. van Leekwijck et al., "Neurosciences and 6G: Lessons from and needs of communicative brains," arXiv preprint arXiv:2004.01834, 2020.
[15] Q. Liu, B. Kang, K. Yu, X. Qi, J. Li, S. Wang, and H.-A. Li, "Contour-maintaining-based image adaption for an efficient ambulance service in intelligent transportation systems," IEEE Access, vol. 8, pp. 12644–12654, 2020.
[16] A. Ndikumana, N. H. Tran, K. T. Kim, C. S. Hong et al., "Deep learning based caching for self-driving cars in multi-access edge computing," IEEE Transactions on Intelligent Transportation Systems, 2020.
[17] J. Zhang, F.-Y. Wang, K. Wang, W.-H. Lin, X. Xu, and C. Chen, "Data-driven intelligent transportation systems: A survey," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1624–1639, 2011.
[18] K. Sadeghi, A. Banerjee, J. Sohankar, and S. K. Gupta, "Safedrive: An autonomous driver safety application in aware cities," in . IEEE, 2016, pp. 1–6.
[19] Y. Xing, C. Lv, and D. Cao, "Personalized vehicle trajectory prediction based on joint time series modeling for connected vehicles," IEEE Transactions on Vehicular Technology, 2019.
[20] M. Malinverno, J. Mangues-Bafalluy, C. Casetti, C. F. Chiasserini, M. Requena-Esteso, and J. Baranda, "An edge-based framework for enhanced road safety of connected cars," IEEE Access, 2020.
[21] A. Ingale, P. Sahu, R. Bajpai, A. Maji, and A. Sarkar, "Understanding driver behavior at intersection for mixed traffic conditions using questionnaire survey," in Transportation Research. Springer, 2020, pp. 647–661.
[22] M. Gjoreski, M. Gams, M. Luštrek, P. Genc, J.-U. Garbas, and T. Hassan, "Machine learning and end-to-end deep learning for monitoring driver distractions from physiological and visual signals," IEEE Access, 2020.
[23] Y. Liang, M. L. Reyes, and J. D. Lee, "Real-time detection of driver cognitive distraction using support vector machines," IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, pp. 340–350, 2007.
[24] M. R. Arefin, F. Makhmudkhujaev, O. Chae, and J. Kim, "Aggregating CNN and HOG features for real-time distracted driver detection," in . IEEE, 2019, pp. 1–3.
[25] Y. Xing, C. Lv, H. Wang, D. Cao, E. Velenis, and F.-Y. Wang, "Driver activity recognition for intelligent vehicles: A deep learning approach," IEEE Transactions on Vehicular Technology, vol. 68, no. 6, pp. 5379–5390, 2019.
[26] G. Li, F. Zhu, X. Qu, B. Cheng, S. Li, and P. Green, "Driving style classification based on driving operational pictures," IEEE Access, vol. 7, pp. 90180–90189, 2019.
[27] A. Němcová, V. Svozilová, K. Bucsuházy, R. Smíšek, M. Mézl, B. Hesko, M. Belák, M. Bilík, P. Maxera, M. Seitl et al., "Multimodal features for detection of driver stress and fatigue," IEEE Transactions on Intelligent Transportation Systems, 2020.
[28] J. A. Healey and R. W. Picard, "Detecting stress during real-world driving tasks using physiological sensors," IEEE Transactions on Intelligent Transportation Systems, vol. 6, no. 2, pp. 156–166, 2005.
[29] A. S. Le, T. Suzuki, and H. Aoki, "Evaluating driver cognitive distraction by eye tracking: From simulator to driving," Transportation Research Interdisciplinary Perspectives, p. 100087, 2019.
[30] S. V. Deshmukh and O. Dehzangi, "Characterization and identification of driver distraction during naturalistic driving: An analysis of ECG dynamics," in Advances in Body Area Networks I. Springer, 2019, pp. 1–13.
[31] M. Ali, F. Al Machot, A. H. Mosa, and K. Kyamakya, "CNN based subject-independent driver emotion recognition system involving physiological signals for ADAS," in Advanced Microsystems for Automotive Applications 2016. Springer, 2016, pp. 125–138.
[32] O. Dehzangi and M. Taherisadr, "EEG based driver inattention identification via feature profiling and dimensionality reduction," in Advances in Body Area Networks I. Springer, 2019, pp. 107–121.
[33] M. S. Munir, S. F. Abedin, M. G. R. Alam, N. H. Tran, and C. S. Hong, "Intelligent service fulfillment for software defined networks in smart city," in . IEEE, 2018, pp. 516–521.
[34] M. G. R. Alam, S. F. Abedin, S. I. Moon, A. Talukder, and C. S. Hong, "Healthcare IoT-based affective state mining using a deep convolutional neural network," IEEE Access, vol. 7, pp. 75189–75202, 2019.
[35] R. Agrawal, T. Imieliński, and A. Swami, "Mining association rules between sets of items in large databases," in Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, 1993, pp. 207–216.
[36] N. Friedman, D. Geiger, and M. Goldszmidt, "Bayesian network classifiers," Machine Learning, vol. 29, no. 2-3, pp. 131–163, 1997.
[37] P. Porambage, J. Okwuibe, M. Liyanage, M. Ylianttila, and T. Taleb, "Survey on multi-access edge computing for internet of things realization," IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 2961–2991, 2018.
[38] S. Kekki, W. Featherstone, Y. Fang, P. Kuure, A. Li, A. Ranjan, D. Purkayastha, F. Jiangping, D. Frydman, G. Verin et al., "MEC in 5G networks," ETSI White Paper, vol. 28, pp. 1–28, 2018.
[39] M. S. Munir, S. F. Abedin, N. H. Tran, and C. S. Hong, "When edge computing meets microgrid: A deep reinforcement learning approach," IEEE Internet of Things Journal, vol. 6, no. 5, pp. 7360–7374, 2019.
[40] M. G. R. Alam, M. S. Munir, M. Z. Uddin, M. S. Alam, T. N. Dang, and C. S. Hong, "Edge-of-things computing framework for cost-effective provisioning of healthcare data," Journal of Parallel and Distributed Computing, vol. 123, pp. 54–60, 2019.
[41] S. F. Abedin, M. G. R. Alam, R. Haw, and C. S. Hong, "A system model for energy efficient green-IoT network," in . IEEE, 2015, pp. 177–182.
[42] M. G. R. Alam, R. Haw, S. S. Kim, M. A. K. Azad, S. F. Abedin, and C. S. Hong, "EM-psychiatry: An ambient intelligent system for psychiatric emergency," IEEE Transactions on Industrial Informatics, vol. 12, no. 6, pp. 2321–2330, 2016.
[43] P. Kinney, P. Jamieson, and J. Gutiérrez, "IEEE 802.15 WPAN task group 4 (TG4)," IEEE Task Group, 2006.
[44] IEEE 802.11 Working Group et al., "Std. 802.11n-2009. Amendment to ANSI/IEEE Std. 802.11, 2007 edition: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications," ANSI/IEEE, Tech. Rep., 2007.
[45] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella, "On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration," IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1657–1681, 2017.
[46] S. F. Abedin, M. G. R. Alam, S. A. Kazmi, N. H. Tran, D. Niyato, and C. S. Hong, "Resource allocation for ultra-reliable and enhanced mobile broadband IoT applications in fog network," IEEE Transactions on Communications, vol. 67, no. 1, pp. 489–502, 2018.
[47] A. K. Bairagi, S. F. Abedin, N. H. Tran, D. Niyato, and C. S. Hong, "QoE-enabled unlicensed spectrum sharing in 5G: A game-theoretic approach," IEEE Access, vol. 6, pp. 50538–50554, 2018.
[48] M. S. Munir, S. F. Abedin, and C. S. Hong, "Artificial intelligence-based service aggregation for mobile-agent in edge computing," in . IEEE, 2019, pp. 1–6.
[49] M. Jaber, M. A. Imran, R. Tafazolli, and A. Tukmanov, "5G backhaul challenges and emerging research directions: A survey," IEEE Access, vol. 4, pp. 1743–1766, 2016.
[50] State Farm, "State Farm Distracted Driver Detection," https://kaggle.com/c/state-farm-distracted-driver-detection/data, 2016. [Online; accessed July-2017].
[51] K. R. Scherer, V. Shuman, J. Fontaine, and C. Soriano Salinas, "The GRID meets the wheel: Assessing emotional feeling via self-report," 2013.
[52] M. M. Bradley and P. J. Lang, "Measuring emotion: The self-assessment manikin and the semantic differential," Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, no. 1, pp. 49–59, 1994.
[53] H. White, "Maximum likelihood estimation of misspecified models," Econometrica: Journal of the Econometric Society, pp. 1–25, 1982.
[54] S. Sabour, N. Frosst, and G. E. Hinton, "Dynamic routing between capsules," in Advances in Neural Information Processing Systems, 2017, pp. 3856–3866.
[55] BITalino, "(R)evolution board kit BLE," https://bitalino.com/en/board-kit-ble. [Online; accessed July-2020].
[56] R. Gravina, P. Alinia, H. Ghasemzadeh, and G. Fortino, "Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges," Information Fusion, vol. 35, pp. 68–80, 2017.
[57] G. Fortino, S. Galzarano, R. Gravina, and W. Li, "A framework for collaborative computing and multi-sensor data fusion in body sensor networks," Information Fusion, vol. 22, pp. 50–70, 2015.
[58] C. Vera-Munoz, L. Pastor-Sanz, G. Fico, M. T. Arredondo, F. Benuzzi, and A. Blanco, "A wearable EMG monitoring system for emotions assessment," in Probing Experience. Springer, 2008, pp. 139–148.
[59] J. Wijsman, B. Grundlehner, J. Penders, and H. Hermens, "Trapezius muscle EMG as predictor of mental stress," in Wireless Health 2010, 2010, pp. 155–163.
[60] F. Agrafioti, D. Hatzinakos, and A. K. Anderson, "ECG pattern analysis for emotion detection," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 102–115, 2011.
[61] T. Alotaiby, F. E. A. El-Samie, S. A. Alshebeili, and I. Ahmad, "A review of channel selection algorithms for EEG signal processing," EURASIP Journal on Advances in Signal Processing, vol. 2015, no. 1, p. 66, 2015.
[62] J. Posner, J. A. Russell, and B. S. Peterson, "The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology," Development and Psychopathology, vol. 17, no. 3, pp. 715–734, 2005.
[63] M. M. Bradley and P. J. Lang, "Affective reactions to acoustic stimuli," Psychophysiology, vol. 37, no. 2, pp. 204–215, 2000.
[64] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, "DEAP: A database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2011.
[65] G. D. Forney, "The Viterbi algorithm," Proceedings of the IEEE.
Nielsen Norman Group, 2014.
Md. Shirajum Munir (S’19) received the B.S.degree in computer science and engineering fromKhulna University, Khulna, Bangladesh, in 2010. Heis currently pursuing the Ph.D. degree in computerscience and engineering at Kyung Hee University,Seoul, South Korea. He served as a Lead Engineerwith the Solution Laboratory, Samsung Researchand Development Institute, Dhaka, Bangladesh, from2010 to 2016. His current research interests includeIoT network management, fog computing, mobileedge computing, software-defined networking, smartgrid, and machine learning.
Sarder Fakhrul Abedin (S'18) received his B.S. degree in Computer Science from Kristianstad University, Kristianstad, Sweden, in 2013. He received his Ph.D. degree in Computer Engineering from Kyung Hee University, South Korea, in 2020. He served as a Postdoctoral Researcher at the Department of Computer Science and Engineering, Kyung Hee University, Korea. Currently, he is serving as a Postdoctoral Researcher at the Department of Information Systems and Technology, Mid Sweden University, Sweden. His research interests include Internet of Things (IoT) network management, edge computing, industrial 5G, machine learning, and wireless networking.
Ki Tae Kim received the B.S. and M.S. degrees incomputer science and engineering from Kyung HeeUniversity, Seoul, South Korea, in 2017 and 2019,respectively, where he is currently pursuing thePh.D. degree in computer science and engineering.His research interests include SDN/NFV, wirelessnetworks, unmanned aerial vehicle communications,and machine learning.
Do Hyeon Kim received the B.S. degree in commu-nication engineering from Jeju National University,in 2014, and the M.S. degree from Kyung HeeUniversity, in 2017, where he is currently pursuingthe Ph.D. degree with the Department of Com-puter Science and Engineering. His research interestsinclude multiaccess edge computing and wirelessnetwork virtualization.
Md. Golam Rabiul Alam (S'15-M'17) received B.S. and M.S. degrees in Computer Science and Engineering, and Information Technology, respectively. He received his Ph.D. in Computer Engineering from Kyung Hee University, South Korea, in 2017. He served as a Postdoctoral Researcher in the Computer Science and Engineering Department, Kyung Hee University, Korea, from March 2017 to February 2018. He is currently an Associate Professor in the Computer Science and Engineering Department at BRAC University, Bangladesh. His research interests include healthcare informatics, mobile cloud and edge computing, ambient intelligence, and persuasive technology. He is a member of IEEE IES, CES, CS, SPS, CIS, and ComSoc. He is also a member of the Korean Institute of Information Scientists and Engineers (KIISE) and has received several best paper awards from prestigious conferences.