Detecting Compliance of Privacy Policies with Data Protection Laws
Ayesha Qamar a,∗, Tehreem Javed a,∗ and Mirza Omer Beg a
a National University of Computer and Emerging Sciences, Islamabad
ARTICLE INFO

Keywords: GDPR, PDPA, Privacy Policy Analysis, Data Protection Regulation, Compliance, Semantic Similarity

ABSTRACT
Privacy policies are legal documents that describe the practices an organization or company has adopted in handling the personal data of its users. But because policies are legal documents, they are often written in extensive legal jargon that is difficult to understand. Though work has been done on privacy policies, none of it caters to the problem of verifying whether a given privacy policy adheres to the data protection laws of a given country or state. We aim to bridge that gap by providing a framework that analyses privacy policies in light of various data protection laws, such as the General Data Protection Regulation (GDPR). To achieve this, we first labelled both the privacy policies and the laws. A correlation scheme was then developed to map the contents of a privacy policy to the appropriate segments of the law that the policy must conform to. Finally, we check the compliance of a privacy policy's text with the corresponding text of the law using NLP techniques. Using such a tool, users would be better equipped to understand how their personal data is managed. For now, we have provided a mapping for the GDPR and PDPA, but other laws can easily be incorporated into the already built pipeline.
1. Introduction
Although in recent times a lot of work has been carried out at the intersection of Natural Language Processing and privacy to analyze [16][5][12][14], understand [50] and better represent [12] privacy policies, none of this work relates privacy policies to data protection laws. The analysis of privacy policies on their own is not enough; there needs to be a mechanism to relate those policies to laws. A policy dictates what an application or software does with the user's data, but that information alone is not adequate to judge the policy's transparency and usefulness [4]. A possible solution is a system, powered by machine learning, that reviews a privacy policy, checks whether it is in accordance with the laws of the relevant country (or countries), and identifies any areas where a violation is detected. Using such an automated tool, a user can gain a deeper understanding of what is happening with their data in a legal light. Automating the compliance check of privacy policies against laws can be of great value: it arms users to understand policies with respect to laws without getting lost in legal jargon and details.

Privacy policies and the data protection laws regulating them are both highly extensive and full of legal jargon. In fact, it is estimated that about 201 hours on average are needed by an average user just to read all the privacy policies encountered in a year [1]. As a result, consumers do not fully understand what they are signing up for [2] and often do not know whether the policies they are agreeing to infringe on their legal rights.
Abbreviations. NLP, natural language processing; GDPR, General Data Protection Regulation; PDPA, Personal Data Protection Act.
∗ Corresponding authors: Tehreem Javed and Ayesha Qamar, Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad.
Email addresses: [email protected] (T. Javed), [email protected] (A. Qamar), [email protected] (M. Beg)
Moreover, a company's legal department spends hours reviewing its privacy policy to see if it is compatible with a given country's laws. This is a rigorous process, both because each country has its own data protection laws and because, with the upsurge of the Internet of Things, there has been an escalation in the number and complexity of privacy policies themselves [3]. Hallinan et al. [15] concluded through surveys that the European population at large remains skeptical about how their data is processed, and that any knowledge the public has about data protection is superficial. In this technological era, users' understanding of how their data is processed is crucial for them to make informed decisions, but users either lack a basic understanding of their legal rights or do not have enough time to stay informed of the latest changes. This calls for a way to let users understand what they are signing up for without having to wade through the legal details themselves.
2. Related work
In 2016, Wilson et al. [5] introduced a taxonomy for privacy policies, OPP-115, and made this corpus of 115 annotated policies publicly available. Since then, much work has been done to understand various aspects of privacy policies [12][50]. Sarne et al. [13] showed how an unsupervised technique, Latent Dirichlet Allocation (LDA) [55], can also provide a taxonomy for privacy policies that is much more fine-grained. LDA does not require the data to be pre-labelled: it works by randomly grouping words together into topics and then iteratively improving the grouping until convergence. They also showed that the taxonomy obtained had a substantial overlap with that of OPP-115. The research also provides insight into the topics being addressed in privacy policies these days. Apart from that, Hidden Markov Models [14] have been used previously to categorize privacy policies in an unsupervised way. The policies are segmented based on their section headings by crowdworkers. A Hidden Markov Model-like approach is then used to align the segments such that an issue (addressed in the policy) corresponds to a hidden state. This correspondence is based on the bigrams in the segment of the policy and its distribution of words.

Tesfay et al. [54] presented an approach to summarize long privacy policies using machine learning and then check them against GDPR aspects as criteria. Work has also been done to visually represent policies: Harkous et al. [12] developed a framework using deep learning techniques and the power of Convolutional Neural Networks to analyse policies at a finer level, and developed a hierarchy to organise the information in privacy policies. They then presented the information in policies in a visual format and also provided a question-answering interface where users' queries about a privacy policy are answered. Recently, Zimmeck et al. [50] compared the actual practices of a million apps with those stated in their privacy policies and flagged any discrepancies as compliance issues. Renaud K. [23] analysed the General Data Protection Regulation to find six requirements that a GDPR-compliant privacy policy must have, and provided a GDPR-compliant privacy policy template for policy makers to use. While work has been done to categorize, summarize and visualize privacy policies, none of it has yet provided a universal method to check privacy policies' compliance with the very laws that regulate them.

A Qamar et al.: Preprint submitted to Elsevier
Page 1 of 7
3. Methodology
We propose a system which, given a privacy policy, checks its compliance with a data protection law. For this, we first labelled privacy policies. Data protection laws were also segmented and labelled. Finally, we checked the compliance of the resulting chunks of the policies with those of the law. The details are given in the following sections.
We have labelled policies based on the taxonomy provided by Wilson et al. [5]. The taxonomy is based on a hierarchy of labelling and consists of 10 broad categories and 112 fine-grained categories. The policies are segmented at paragraph level, and each segment is assigned multiple labels. The annotations were done by 3 graduate law students, so there are three versions of the annotations. We have used the annotations in which there is a 0.75 overlap between the annotations, i.e., at least 2 of the 3 students have given the same label. To begin with, we extracted the 115 annotated policies from the OPP-115 [5] dataset and kept only the relevant information, namely the text segments of the policies themselves and the assigned labels. Then, to cater for these multi-class labels, we built binary classification models for the 10 broad categories. Thus, for training the classifiers, the dataset was divided into ten subsets, where each set corresponded to one category and had binary labels (0 if the text segment did not belong to the category and 1 otherwise).

Figure 1: The 8 high-level categories in the OPP-115 dataset. The other two categories not shown are Other and Do Not Track. The latter is not useful as it is no longer mentioned in policies.
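The 0.75-overlap filter described above (a label is kept only when at least 2 of the 3 annotators assigned it) can be sketched as a simple majority vote; the segment text and label names below are illustrative, not taken from the corpus:

```python
from collections import Counter

def majority_labels(annotations, min_votes=2):
    """Keep only labels that at least `min_votes` annotators assigned
    to the same policy segment. `annotations` is a list of label sets,
    one per annotator, for a single segment."""
    votes = Counter(label for labels in annotations for label in set(labels))
    return {label for label, count in votes.items() if count >= min_votes}

# Three (hypothetical) annotators labelling one policy paragraph:
segment_annotations = [
    {"First Party Collection/Use", "Data Retention"},
    {"First Party Collection/Use"},
    {"First Party Collection/Use", "Data Security"},
]
agreed = majority_labels(segment_annotations)
# Only the label that at least two annotators agree on survives.
```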
Then, for the classification, we used Towards Automatic Classification of Policy Text [53] as a starting point and trained a Logistic Regression model and a Support Vector Machine model. In addition, we used a fine-tuned version of the BERT [51] model: we took the pre-trained BERT model for classification and fine-tuned it using a low learning rate for each policy category. We trained classifiers for all ten datasets, tested the three models for each category on a held-out test set, and calculated their F1 scores. The BERT classifier gave better results than the others, so we saved the trained model to use for privacy policy categorization at run time in the final product.
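The one-binary-classifier-per-category setup can be sketched with the Logistic Regression baseline (TF-IDF features are an assumption on our part; the BERT fine-tuning step is omitted, and the toy segments below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_category_classifiers(segments, label_sets, categories):
    """Train one binary classifier per broad policy category.
    `label_sets[i]` holds the set of categories assigned to `segments[i]`."""
    classifiers = {}
    for category in categories:
        # Binary target: 1 if the segment carries this category, else 0.
        y = [1 if category in labels else 0 for labels in label_sets]
        clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(segments, y)
        classifiers[category] = clf
    return classifiers

# Toy illustration:
segments = [
    "we collect your email address when you register",
    "we share aggregate data with advertising partners",
    "we collect usage statistics from your device",
    "you may contact support at any time",
]
label_sets = [
    {"First Party Collection/Use"},
    {"Third Party Sharing/Collection"},
    {"First Party Collection/Use"},
    set(),
]
categories = ["First Party Collection/Use", "Third Party Sharing/Collection"]
classifiers = train_category_classifiers(segments, label_sets, categories)
```

At run time, a new policy segment is passed through every per-category classifier, and the union of the positive predictions becomes its label set.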
We have worked with two laws, i.e., the General Data Protection Regulation (GDPR) and the Personal Data Protection Act (PDPA). The first step in labelling the laws is to segment them.
For the GDPR, we followed the natural hierarchy in which it is written and segmented it according to its Articles, with one segment consisting of all the subpoints of an Article. Following this segmentation scheme left us with 371 segments, with an average word count of 75.11 words per segment. After that, we removed stopwords and punctuation and lemmatized the words.
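The per-Article segmentation and cleanup can be sketched as follows; the stopword list here is a tiny illustrative subset, and the lemmatization step (e.g., via NLTK or spaCy) is omitted for brevity:

```python
import re
import string

# Tiny illustrative stopword list; the actual pipeline would use a
# full list and a lemmatizer on top of this.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "or", "shall", "be", "this"}

def segment_by_article(law_text):
    """Split the law text into one segment per Article, keeping all of
    an Article's sub-points inside the same segment."""
    parts = re.split(r"(?=Article\s+\d+)", law_text)
    return [p.strip() for p in parts if p.strip()]

def preprocess(segment):
    """Lowercase, strip punctuation, and drop stopwords."""
    cleaned = segment.lower().translate(str.maketrans("", "", string.punctuation))
    return [w for w in cleaned.split() if w not in STOPWORDS]
```

Applied to the full GDPR text, `segment_by_article` yields one segment per Article, which is then tokenized and cleaned by `preprocess` before topic modelling.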
Figure 2: The structure of the GDPR. The hierarchy consists of Chapters, Sections, Articles and then points in those Articles.
We decided to use topic modelling to group similar segments together and thus create a taxonomy of the law. We used Latent Dirichlet Allocation (LDA) [55] to achieve this. The decision to use LDA was based on the promising results achieved by [13] in labelling privacy policies. LDA works by assuming that topics in a document and words in a topic
follow some specific distribution. Since it is an unsupervised technique, we only need to provide the number of topics, k, in the document. Since k is a hyperparameter, we experimented with several values and found that setting it to 10 gave the best results in our case.

Figure 3: The perplexity score plotted against multiple values of the number of topics.
Perplexity scores (the lower, the better) did not give any insightful information for deciding the value of k. Therefore, we used the coherence score (a measure for evaluating topic models) as the deciding factor instead. The best coherence value was achieved when k was set to 5. But such a coarse labelling would not have served our purpose, since we know that the GDPR contains at least 10 different topics, as that is the number of its chapters, so we went with the next favourable value of 15.

Figure 4: The number of segments belonging to each topic when k is set to 15. As shown, some of the topics have only one or two segments assigned to them.

Figure 5: The coherence scores plotted against the number of topics k.

Setting the number of topics to 15, however, gave rise to a few topics containing only one or two segments, and merging them seemed the sensible option. So in the end we decided to keep the number of topics at 10. Figure 6 shows the most frequently occurring words in four of the topics. Most of the words are non-overlapping, i.e., they do not occur in multiple topics, which shows that the labelling is effective.

Figure 6: A visual representation of the most frequent words of some of the topics.
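The selection loop over candidate values of k can be sketched with scikit-learn's LDA implementation. Note that scikit-learn exposes perplexity (the quantity in Figure 3) but not coherence; the coherence criterion used above would come from a library such as gensim (`CoherenceModel`), not shown here:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def lda_perplexity_by_k(segments, k_values, random_state=0):
    """Fit one LDA model per candidate topic count k and report its
    perplexity on the same segments (lower is better)."""
    X = CountVectorizer(stop_words="english").fit_transform(segments)
    scores = {}
    for k in k_values:
        lda = LatentDirichletAllocation(n_components=k,
                                        random_state=random_state).fit(X)
        scores[k] = lda.perplexity(X)
    return scores
```

Running this over a range such as `range(5, 21)` on the 371 GDPR segments reproduces the sweep behind Figures 3 and 5.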
The PDPA has two main provisions:
∙ Data Protection (DP) provisions. These provisions are directly concerned with the handling and collection of users' personal data.
∙ Do Not Call (DNC) provisions. The Do Not Call Registry is not applicable to privacy policies, as that part is only concerned with how to handle the telephone numbers of Singaporean users (including video or voice calls or text messages) and does not detail how the phone number should be collected. Since these requirements are not directly linked with privacy policies, we skipped these provisions.
For the PDPA, we did manual annotation. As the PDPA is a relatively short law, we did not feel the need to label it through any unsupervised method to obtain the segment categorizations. Upon manual reading, only PART II to PART V of the law were relevant to personal data, and we extracted the appropriate text from these parts. The titles of the parts from which law text was extracted are listed below [22]:
∙ PART II: PERSONAL DATA PROTECTION COMMISSION AND ADMINISTRATION
∙ PART III: GENERAL RULES WITH RESPECT TO PROTECTION OF PERSONAL DATA
∙ PART IV: COLLECTION, USE AND DISCLOSURE OF PERSONAL DATA
∙ PART V: ACCESS TO AND CORRECTION OF PERSONAL DATA
We leverage the work done by Karen et al. [57], which provides a template and lays out the requirements that policies must follow in order to be GDPR compliant. The GDPR requirements that customers must be informed about are:
∙ GDPR1: What Data will be Collected and Why
∙ GDPR2: How Data Will Be Processed
∙ GDPR3: How Long Data Will Be Retained
∙ GDPR4: Who Can Be Contacted to Have Data Removed or Produced
Next, we were left with the task of manually extracting all the text from the GDPR that pertained to the specific categories of interest. Because the law was already categorized using LDA, this step became easier. Only some portion of the law was useful for our purpose, i.e., the articles related to personal data processing, and not the chapters about how the law itself should be implemented or where it is applicable, such as Chapter X: DELEGATED ACTS AND IMPLEMENTING ACTS. For example, Article 14 of the GDPR, on information to be provided where personal data have not been obtained from the data subject [21], was categorized as a GDPR segment belonging to "What Data will be Collected and Why" and so is mapped to the First Party and Third Party categories of privacy policies. In total, ten such law segments were made, each assigned one of the four above-mentioned requirements.

PERSONAL DATA PROTECTION ACT 2012. Retrieved June 10, 2020 from https://sso.agc.gov.sg/Act/PDPA2012. We merged two categories from the paper into one. GDPR Article 14(1)(d,e): "Where personal data have not been obtained from the data subject, the controller shall provide the data subject with the following information: the categories of personal data concerned; the recipients or categories of recipients of the personal data, where applicable."
According to the Personal Data Protection Commission (PDPC), there are three broader categories, and further subcategories, of the obligations set by the PDPA regarding protection of personal data [23]. We extracted only those subcategories that apply to privacy policies. The categories, with a brief description of each, are stated below:

• Collection, use and disclosure of personal data
– Consent: An organization should first ask customers to give consent to collect, use or disclose their personal data. Users should also have the ability to withdraw consent.
– Purpose and Notification: Consent should only be taken for data that is essential to provide a given service to users. Users' data can only be obtained or disclosed for the purposes the user was informed about. Users should also be made aware of the reasons for data collection.
• Accountability to individuals
– Access and Correction: If customers request, they should be provided with their collected personal data, along with the ways in which the data was collected and used within the time frame of a year. Users can also request to get their data updated to fix any errors.
• Care of personal data
– Retention: Data should be deleted once the purpose it was collected for has been fulfilled. Keeping data longer than needed for business reasons is prohibited.
When a new policy is entered to check for compliance, it is first segmented, and then each segment is labelled with one or more of the 10 labels. At this stage, we have the categorized policy segments (entered at runtime) and the already labelled law segments. Next, we need to relate each of the policy segments with one or more of the law segments.
General Data Protection Regulation
For the GDPR, we take the labelled segments of the policy and map the five privacy policy categories to the four GDPR requirement categories.
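The category correspondence of Figure 7 can be encoded as a simple lookup. Only the First Party/Third Party to GDPR1 link is stated explicitly in the text (via the Article 14 example); the remaining entries below are illustrative assumptions, not the paper's exact mapping:

```python
# Sketch of the policy-category -> GDPR-requirement mapping (cf. Figure 7).
# Entries other than the First/Third Party -> GDPR1 link are assumptions.
POLICY_TO_GDPR = {
    "First Party Collection/Use": "GDPR1",
    "Third Party Sharing/Collection": "GDPR1",
    "Data Security": "GDPR2",
    "Data Retention": "GDPR3",
    "User Access, Edit and Deletion": "GDPR4",
}

def gdpr_requirements_for(policy_labels):
    """Return the GDPR requirement categories that a labelled policy
    segment must be checked against."""
    return {POLICY_TO_GDPR[label] for label in policy_labels
            if label in POLICY_TO_GDPR}
```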
Personal Data Protection Act
The policy categories are mapped to the PDPA based on the guidelines provided by the PDPC.
Figure 7: The GDPR law segment categories, represented by the outer boxes, and the policy categories linked with each.

Figure 8: The PDPA law categories, represented by the outer boxes, and the policy categories written in purple.
After allocating categories to the segments of the laws and policies, we compute the similarity between law and policy segments that fall under the same category. This similarity is used as a measure to decide whether the policy is in compliance with the law. We used BERT [51][17] word embeddings and Universal Sentence Encoder [18] embeddings to find the similarity. Word embeddings such as word2vec and GloVe have been useful in improving accuracy across NLP tasks. BERT word embeddings improve upon these methods because BERT is the first unsupervised, deeply bidirectional system for pre-training in NLP. Context-free models such as word2vec or GloVe generate a single "word embedding" representation for each word in the vocabulary, so "bank" would have the same representation in "bank deposit" and "river bank". We use the pre-trained uncased BERT model to get sentence embeddings by combining the word embeddings, taking the mean across layers for each word. These embeddings are then used to find the similarity between pairs of policy and law segments using cosine similarity and Euclidean distance. Using the Universal Sentence Encoder [52], we obtained sentence embeddings and then used cosine similarity. Two model architectures are available: one uses the transformer architecture and yields higher-quality embeddings while requiring greater resources and computing power, while the second uses fewer resources at the cost of slightly lower accuracy. We went with the latter to utilize resources optimally. The architecture we used is the Deep Averaging Network (DAN): first, word embeddings along with bi-grams are averaged, and the result is fed into a feedforward deep neural network (DNN) to get sentence embeddings of 512 dimensions.
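The pooling and similarity arithmetic described above can be sketched as follows. The real vectors would come from a pre-trained BERT model or the Universal Sentence Encoder (512-d); the 4-d vectors below are placeholders:

```python
import numpy as np

def mean_pool(token_embeddings):
    """Average per-token vectors (e.g., BERT word embeddings for one
    segment) into a single segment embedding."""
    return np.mean(token_embeddings, axis=0)

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean_distance(u, v):
    """Straight-line distance between two embedding vectors."""
    return float(np.linalg.norm(u - v))

# Placeholder "embeddings" for one policy segment (two tokens) and one
# law segment:
policy_vec = mean_pool(np.array([[0.2, 0.1, 0.0, 0.4],
                                 [0.4, 0.3, 0.2, 0.0]]))
law_vec = np.array([0.3, 0.2, 0.1, 0.2])
similarity = cosine_similarity(policy_vec, law_vec)
```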
4. Compliance Score
Using the similarity score between related policy and law segments as the starting point, we calculate the compliance score using the formula shown below:
Compliance = (Max − Score) / (Max − Min)
As we do not have a labelled dataset of compliance scores between law and policy segments, we created a small set to find the required compliance thresholds (Max and Min). To decide where to set the thresholds for compliance and non-compliance based on the cosine similarity score obtained from the Universal Sentence Encodings of policy and law segments, we created a dummy dataset. The dataset consists of a law segment, for both the PDPA and the GDPR, and a policy segment, along with a score from 0 to 5: 1 being the least compliant, 5 being completely compliant, and 0 indicating absolute irrelevance between the texts. The problem then simply reduced to identifying the correct threshold values to turn the similarity score into a compliance score. For the GDPR, the thresholds were found to be a maximum of 0.6 and a minimum of 0.25; that is, when a policy segment was in complete compliance with a law segment, the similarity score was 0.6, and when it had zero compliance, the score was 0.25. Using these thresholds, we find the compliance score for the four GDPR requirements: what data will be collected and why, how data will be processed, how long it will be retained, and who can be contacted to have data removed or produced. For the PDPA, the thresholds that gave the optimal results were a maximum of 0.09 for total compliance and 0.21 for half compliance, with compliance decreasing as the score increased towards 0.50.
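The thresholding can be sketched as a clipped linear normalization. Note that the printed formula, (Max − Score) / (Max − Min), assigns compliance 0 at Score = Max, which matches the reported PDPA trend (lower score, more compliant); for the GDPR, where higher similarity means higher compliance, the complement is needed, so this sketch exposes a flag:

```python
def compliance_score(similarity, min_t, max_t, higher_is_compliant=True):
    """Turn a raw similarity score into a compliance score in [0, 1]
    using the empirically chosen thresholds (GDPR: min 0.25, max 0.6).
    Scores outside the thresholds are clipped."""
    clipped = min(max(similarity, min_t), max_t)
    normalized = (max_t - clipped) / (max_t - min_t)  # the paper's formula
    return 1.0 - normalized if higher_is_compliant else normalized
```

For example, a GDPR similarity of 0.6 maps to full compliance and 0.25 to zero compliance.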
5. Experimental Evaluation
∙ Labelling Policies: To evaluate the labelling of privacy policies, we used a held-out dataset and checked the accuracy of our models (SVM, LR, BERT) on that data. The complete results can be seen in figure 9. BERT gave a better F1 score for most categories.
∙ Labelling Laws: For the laws, we are going to have an expert verify the labelling and annotation, since no labelled dataset of data protection laws is available.
∙ Finding Similarity: Due to the unavailability of a policy–law compliance dataset, we evaluate our similarity model on the Semantic Textual Similarity (STS) development dataset. The STS dataset comprises sentence pairs from the news, captions, and forums genres. These sentence pairs are labelled for similarity on a scale of 0 to 5, where 5 means complete similarity and 0 means no similarity at all. The Pearson correlation obtained using BERT embeddings (taking the mean of all word vectors as well as the sum of all vectors) and the correlation obtained using Universal Sentence Encoding are shown in Table 1.

Table 1
The Pearson correlation obtained with Universal Sentence Encoding outperforms the BERT models.

Model                           Pearson Correlation
BERT with cosine similarity     0.55
BERT with Euclidean distance    0.57
Universal Sentence Encoder      0.76

Table 2
The F1 scores of Logistic Regression, Support Vector Machine and BERT across all the categories.

Categories                        LR    SVM   BERT
First Party Collection/Use        0.79  0.81  0.85
Third Party Sharing/Collection    0.77  0.78  0.87
User Choice/Control               0.68  0.70  0.73
User Access, Edit and Deletion    0.81  0.82  0.66
Data Retention                    0.43  0.40  0.62
Data Security                     0.73  0.77  0.77

∙ Test Case:
We tested our system using the Nestlé privacy policy. This policy contains a clause about data retention which states: "Nestlé will only retain your personal data for as long as it is necessary for the stated purpose, taking into account also our need to answer queries or resolve problems, provide improved and new services, and comply with legal requirements under applicable laws. This means that we may retain your personal data for a reasonable period after your last interaction with us. When the personal data that we collect is no longer required in this way, we destroy or delete it in a secure manner." We first run the privacy policy against the GDPR as-is, and the system gives a 99.7 percent data retention compliance score, as it should. Then we replace this section with "Nestle will store the data for as long as we want". When this altered policy is run against the GDPR, the compliance report gives a score of around zero percent.
6. Conclusion and Further Work
Our work shows that automated compliance checking of legal documents gives plausible results. This opens up the possibility of using such techniques to check legal compliance in contracts and similar documents. Further work can be done in this area by adding more data protection laws, such as Canada's PIPEDA and the US Privacy Shield. Another improvement would be to try more complex architectures and models to correlate laws and policies.
References

, pages 1–6. IEEE, 2017.
[18] T. Anwar and O. Baig. TAC at SemEval-2020 Task 12: Ensembling approach for multilingual offensive language identification in social media. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2177–2182, 2020.
[19] M. U. Arshad, M. F. Bashir, A. Majeed, W. Shahzad, and M. O. Beg.
Corpus for emotion detection on Roman Urdu. In , pages 1–6. IEEE, 2019.
[20] M. Asad, M. Asim, T. Javed, M. O. Beg, H. Mujtaba, and S. Abbas. DeepDetect: detection of distributed denial of service attacks using deep learning. The Computer Journal, 63(7):983–994, 2020.
[21] M. N. Awan and M. O. Beg. TOP-Rank: a TopicalPositionRank for extraction and classification of keyphrases in text. Computer Speech & Language, 65:101–116, 2021.
[22] A. A. Bangash, H. Sahar, and M. O. Beg. A methodology for relating software structure with energy consumption. In , pages 111–120. IEEE, 2017.
[23] M. Beg. Critical path heuristic for automatic parallelization. University of Waterloo, David R. Cheriton School of Computer Science, Technical Report CS-2008-16, 2008.
[24] M. Beg and P. van Beek. A constraint programming approach for integrated spatial and temporal scheduling for clustered architectures. ACM Transactions on Embedded Computing Systems (TECS), 13(1):1–23, 2013.
[25] M. Beg and M. Dahlin. A memory accounting interface for the Java programming language. Technical Report CS-TR-01-40, University of Texas at Austin, 2001.
[26] M. Beg and P. Van Beek. A graph theoretic approach to cache-conscious placement of data for direct mapped caches. In Proceedings of the 2010 International Symposium on Memory Management, pages 113–120, 2010.
[27] M. Beg and P. Van Beek. A constraint programming approach for instruction assignment. In , pages 25–34. IEEE, 2011.
[28] M. O. Beg. Combinatorial problems in compiler optimization. 2013.
[29] M. O. Beg, M. N. Awan, and S. S. Ali. Algorithmic machine learning for prediction of stock prices. In FinTech as a Disruptive Technology for Financial Institutions, pages 142–169. IGI Global, 2019.
[30] N. Dilawar, H. Majeed, M. O. Beg, N. Ejaz, K. Muhammad, I. Mehmood, and Y. Nam. Understanding citizen issues through reviews: A step towards data informed planning in smart cities. Applied Sciences, 8(9):1589, 2018.
[31] M. U. Farooq, M. O. Beg, et al. Bigdata analysis of Stack Overflow for energy consumption of Android framework. In , pages 1–9. IEEE, 2019.
[32] M. U. Farooq, S. U. R. Khan, and M. O. Beg. MELTA: A method level energy estimation technique for Android development. In , pages 1–10. IEEE, 2019.
[33] A. R. Javed, M. O. Beg, M. Asim, T. Baker, and A. H. Al-Bayatti. AlphaLogger: Detecting motion-based side-channel attack using smartphone keystrokes. Journal of Ambient Intelligence and Humanized Computing, pages 1–14, 2020.
[34] A. R. Javed, M. U. Sarwar, M. O. Beg, M. Asim, T. Baker, and H. Tawfik. A collaborative healthcare framework for shared healthcare plan with ambient intelligence. Human-centric Computing and Information Sciences, 10(1):1–21, 2020.
[35] H. T. Javed, M. O. Beg, H. Mujtaba, H. Majeed, and M. Asim. Fairness in real-time energy pricing for smart grid using unsupervised learning. The Computer Journal, 62(3):414–429, 2019.
[36] M. Karsten, S. Keshav, S. Prasad, and M. Beg. An axiomatic basis for communication. ACM SIGCOMM Computer Communication Review, 37(4):217–228, 2007.
[37] H. S. Khawaja, M. O. Beg, and S. Qamar. Domain specific emotion lexicon expansion. In , pages 1–5. IEEE, 2018.
[38] A. Majeed, H. Mujtaba, and M. O. Beg. Emotion detection in Roman Urdu text using machine learning. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering Workshops, pages 125–130, 2020.
[39] B. Naeem, A. Khan, M. O. Beg, and H. Mujtaba. A deep learning framework for clickbait detection on social area network using natural language cues. Journal of Computational Social Science, pages 1–13, 2020.
[40] S. Naeem, M. Iqbal, M. Saqib, M. Saad, M. S. Raza, Z. Ali, N. Akhtar, M. O. Beg, W. Shahzad, and M. U. Arshad. Subspace Gaussian mixture model for continuous Urdu speech recognition using Kaldi. In , pages 1–7. IEEE, 2020.
[41] S. Qamar, H. Mujtaba, H. Majeed, and M. O. Beg. Relationship identification between conversational agents using emotion analysis. Cognitive Computation, pages 1–15, 2021.
[42] H. Sahar, A. A. Bangash, and M. O. Beg. Towards energy aware object-oriented development of Android applications. Sustainable Computing: Informatics and Systems, 21:28–46, 2019.
[43] M. Tariq, H. Majeed, M. O. Beg, F. A. Khan, and A. Derhab. Accurate detection of sitting posture activities in a secure IoT based assisted living environment. Future Generation Computer Systems, 92:745–757, 2019.
[44] A. Uzair, M. O. Beg, H. Mujtaba, and H. Majeed. WEEC: Web energy efficient computing: A machine learning approach. Sustainable Computing: Informatics and Systems, 22:230–243, 2019.
[45] A. Zafar, H. Mujtaba, M. O. Beg, and S. Ali. Deceptive level generator. In AIIDE Workshops, 2018.
[46] A. Zafar, H. Mujtaba, S. Ashiq, and M. O. Beg. A constructive approach for general video game level generation. In , pages 102–107. IEEE, 2019.
[47] A. Zafar, H. Mujtaba, M. T. Baig, and M. O. Beg. Using patterns as objectives for general video game level generation. ICGA Journal, 41(2):66–77, 2019.
[48] A. Zafar, H. Mujtaba, and M. O. Beg. Search-based procedural content generation for GVG-LG. Applied Soft Computing