A. A. Zaidan
Information Technology University
Publication
Featured research published by A. A. Zaidan.
Journal of Medicinal Plants Research | 2010
Hamdan O. Alanazi; Hamid A. Jalab; Gazi Mahabubul Alam; B. B. Zaidan; A. A. Zaidan
Nowadays, health care is one of the most important subjects in life. In the USA, experts estimate that 100 billion dollars will be spent on it over the next 10 years. The Electronic Medical Record (EMR) is usually a computerized legal medical record created in an organization that delivers care, such as a hospital or a doctor's surgery. In the age of technology, one of the most important requirements for an EMR is that it secures patients' records, protects their rights, and governs the disclosure of their data. This study presents an overview of the importance of EMR privacy and patients' rights. In addition, cryptography algorithms and security requirements are discussed, and the paper also reviews the different architectures, designs, and systems reported in the literature. In a nutshell, most of these systems are poor in terms of achieving the security requirements, and most do not address patients' rights or how the system can identify the person who disclosed these records.
Key words: electronic medical record, information security, data privacy, rights of the patient, cryptography algorithms.
International Journal of Computer and Electrical Engineering | 2009
Alaa Y. Taqa; A. A. Zaidan; B. B. Zaidan
Steganography is the art of information hiding and invisible communication. Unlike cryptography, where the goal is to secure communication from a snooper by making the data unintelligible, steganography hides the communication itself. In this paper we propose a collaborative approach between steganography and cryptography. The approach achieves high-rate, highly secure data hiding using secret-key steganography and the AES (Rijndael) method. We also overview the use of data hiding techniques and their classification, highlight the strengths of the AES algorithm, and explain why AES was chosen. In addition to addressing these security issues, we use digital video as the cover for the hidden data. Video was chosen as the cover because the large number of frames per second overcomes the limitation on hiding capacity. The experimental results show that the hiding, encryption, extraction, and decryption functions succeed without affecting the quality of the video.
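The combination described above can be illustrated with a minimal sketch: the secret is first encrypted with AES and the ciphertext is then embedded in the least significant bits of a single video frame. This is only an illustration under assumed tooling (pycryptodome and NumPy, with a random stand-in frame), not the authors' implementation.

```python
# Minimal sketch of the general idea, assuming pycryptodome and NumPy; not the
# authors' implementation. The secret is AES-encrypted, then hidden in the frame's LSBs.
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def encrypt_payload(secret: bytes, key: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_EAX)              # authenticated AES mode
    ciphertext, tag = cipher.encrypt_and_digest(secret)
    return cipher.nonce + tag + ciphertext           # pack nonce and tag with the data

def embed_lsb(frame: np.ndarray, data: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this frame")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the LSB
    return flat.reshape(frame.shape)

key = get_random_bytes(16)                            # 128-bit shared secret key
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in video frame
stego_frame = embed_lsb(frame, encrypt_payload(b"hidden message", key))
```

Extraction would be the reverse: read the low-order bits back, unpack the nonce and tag, and decrypt with the same shared key.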
International Journal of Computer and Electrical Engineering | 2009
B. B. Zaidan; A. A. Zaidan; Alaa Y. Taqa; Fazida Othman
Steganography is the idea of hiding private or sensitive data or information within something that appears to be nothing but normal. This article presents a comprehensive study of the stego-image: several factors are examined experimentally, including steganography classification, the applied algorithms, the stego-image itself, and the impact of the hidden data on the image texture. The article also names the most common methods attackers use against data hidden in images. Three well-known techniques are discussed in detail and supported by experimental results: the statistical technique, the trial-and-error technique, and the histogram technique.
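As one concrete example of the histogram-style attacks mentioned above, the classic chi-square check compares the counts of adjacent value pairs, which tend to equalize when an image's least significant bits carry a random payload. The sketch below is an illustration only, assuming NumPy and SciPy; it is not the exact procedure evaluated in the article.

```python
# Illustrative histogram-based (chi-square) check on a grayscale stego-image.
# Assumes NumPy and SciPy; the pair-of-values test is an example, not the paper's method.
import numpy as np
from scipy.stats import chi2

def chi_square_lsb_pvalue(gray_image: np.ndarray) -> float:
    """p-value of a chi-square test on value pairs (2k, 2k+1); a high p-value
    suggests the pairs have been equalized by a random LSB payload."""
    hist = np.bincount(gray_image.reshape(-1), minlength=256).astype(float)
    even, odd = hist[0::2], hist[1::2]
    expected = (even + odd) / 2.0                 # pairs equalize under random LSBs
    mask = expected > 0                           # skip empty pairs
    stat = np.sum((even[mask] - expected[mask]) ** 2 / expected[mask])
    dof = int(mask.sum()) - 1
    return float(chi2.sf(stat, dof))

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in grayscale image
print(f"chi-square p-value: {chi_square_lsb_pvalue(image):.3f}")
```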
International Journal of Information Technology and Decision Making | 2017
B. B. Zaidan; A. A. Zaidan; H. Abdul Karim; N. N. Ahmad
This paper presents a new approach based on multi-dimensional evaluation and benchmarking for data hiding techniques, i.e., watermarking and steganography. The novelty claim is the use of an evaluation matrix (EM) for performance evaluation of data hiding techniques; however, one major problem with performance evaluation of data hiding techniques is to find reasonable thresholds for performance metrics and the trade-off among them in different data hiding applications. Two experiments are conducted. The first experiment included LSB techniques (eight approaches) based on different payload results and the noise gate approach; a total of nine approaches were used. Five audio samples with different audio styles are tested using each of the nine approaches and considering three evaluation criteria, namely, complexity, payload, and quality, to generate watermarked samples. The second experiment uses various decision-making techniques, namely simple additive weighting (SAW), multiplicative exponential weighting (MEW), hierarchical adaptive weighting (HAW), the technique for order of preference by similarity to ideal solution (TOPSIS), the weighted sum model (WSM), and the weighted product method (WPM), to benchmark the results of the first experiment. Mean, standard deviation (STD), and paired sample t-tests are then performed to compare the correlations among different techniques on the basis of ranking results. The findings are as follows: (1) A statistically significant difference is observed among the ranking results of each multi-criteria decision-making (MCDM) technique. (2) TOPSIS-Euclidean is the best technique for solving the benchmarking problem among digital watermarking techniques. (3) Among the decision-making techniques, WSM ranks lowest in terms of solving the benchmarking problem. (4) Under different circumstances, the noise gate watermarking approach performs better than the LSB algorithms.
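For readers unfamiliar with TOPSIS, the technique that ranked best here, the sketch below shows the standard TOPSIS-Euclidean steps on a small, made-up decision matrix with the paper's three criteria (complexity, payload, quality). The numbers and weights are illustrative only and do not come from the experiments; NumPy is assumed.

```python
# TOPSIS-Euclidean sketch on an illustrative decision matrix; not the paper's data.
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Closeness-to-ideal score per alternative (row); higher is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector-normalize each criterion
    weighted = norm * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)      # Euclidean distance to ideal
    d_neg = np.linalg.norm(weighted - anti, axis=1)       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Rows: candidate hiding approaches; columns: complexity (cost), payload, quality (benefit).
scores = np.array([[0.8, 4.0, 38.0],
                   [0.5, 2.5, 42.0],
                   [0.6, 3.0, 40.0]])
weights = np.array([0.2, 0.4, 0.4])
benefit = np.array([False, True, True])                   # complexity is a cost criterion
print(np.argsort(-topsis(scores, weights, benefit)))      # indices, best alternative first
```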
International Journal of Pattern Recognition and Artificial Intelligence | 2014
A. A. Zaidan; H. Abdul Karim; N. N. Ahmad; B. B. Zaidan; Aduwati Sali
Pornographic images are disturbing and malicious content that is easily available through Internet technology. Such content has a negative and lasting effect on children who use the Internet; thus, pornography has become a serious threat not only to Internet users but also to society at large. Therefore, developing efficient and reliable tools to automatically filter pornographic content is imperative. However, the effective interception of pornography remains a challenging issue. In this paper, a four-phase anti-pornography system based on the neural and Bayesian methods of artificial intelligence is proposed. Primitive information on pornography is examined and then used to determine whether a given image falls under the pornography category. First, we present a detailed description of the preliminary study phase, followed by the modeling phase for the proposed skin detector. An anti-pornography system is created in the development phase, which also includes the proposed pornography classifier based on skin detection. Finally, the performance assessment method for the proposed anti-pornography system is discussed in the evaluation phase.
International Journal of Pattern Recognition and Artificial Intelligence | 2013
A. A. Zaidan; H. Abdul Karim; N. N. Ahmad; B. B. Zaidan; Aduwati Sali
Unprecedented advances in Internet technologies with multimedia capabilities have enabled pornography and adult content to be widely and freely distributed as easily as the click of a mouse through various means such as YouTube, Facebook, and Tags. Protecting children from unnecessary exposure to adult content has therefore become a serious problem in the real world. In particular, the considerable perversion in pornography and the exposure of children and society to such perversions lead to moral decay. Constructing an appropriate filter for pornographic images is a major concern in modern society; however, this area poses challenges. This study aims to shed light on a content-based technique that employs an anti-pornography machine and to encourage researchers to study this adult image filtering technique. In this study, we discuss models of skin detection and their advantages and disadvantages in real life. We also elaborate on the pornographic image classifier, its feature extraction and classification processes, and the possible difficulties they may present. This study also analyzes anti-pornography techniques based on skin detection and discusses their strengths and weaknesses.
Journal of Circuits, Systems, and Computers | 2015
A. A. Zaidan; H. Abdul Karim; N. N. Ahmad; B. B. Zaidan; Miss Laiha Mat Kiah
This study proposed a pornography classifier using multi-agent learning as a combination of the Bayesian method, using color features extracted from skin detection based on the YCbCr color space, and the back-propagation neural network method, using shape features also extracted from skin detection. The classification of pornographic images was made more robust to image variation despite size-related engineering problems, a robustness that previous studies failed to achieve. Findings showed that the proposed multi-agent learning-based pornography classifier produced significantly high true positive (TP) and true negative (TN) average rates (96% and 97.33%, respectively). In addition, the proposed classifier achieved significantly low average rates of false negatives (FN) and false positives (FP) (only 4% and 2.67%, respectively). The implementation of this algorithm is crucial and significant not only in identifying pornography but also in blocking Web sites that covertly promote pornography.
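To make the colour-feature side concrete, the sketch below shows only a skin-detection front end: pixels are kept if their Cb/Cr values fall inside a commonly cited skin range of the YCbCr space, and the resulting skin ratio can serve as one simple colour feature. The thresholds, file name, and the OpenCV/NumPy pipeline are assumptions for illustration, not the trained Bayesian or neural models reported in the paper.

```python
# Illustrative YCbCr skin-detection front end; thresholds and inputs are assumptions.
import cv2
import numpy as np

def skin_mask_ycbcr(bgr_image: np.ndarray) -> np.ndarray:
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    return ((cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)).astype(np.uint8)

def skin_ratio(bgr_image: np.ndarray) -> float:
    return float(skin_mask_ycbcr(bgr_image).mean())        # fraction of skin-coloured pixels

image = cv2.imread("sample.jpg")                            # hypothetical input image
if image is not None:
    print(f"skin ratio: {skin_ratio(image):.2%}")
```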
African Journal of Business Management | 2011
A. A. Zaidan; Nikhat Ahmed; H. Abdul Karim; Gazi Mahabubul Alam; B. B. Zaidan
Spam is unsolicited bulk messages sent indiscriminately. According to Wikipedia and a Cisco report, more than 31 trillion spam messages were sent in 2009. These spam or "junk" mails can carry various kinds of messages, such as commercial advertising, pornography, viruses, doubtful products, get-rich-quick schemes, or quasi-legal services. This paper pays direct attention to text spam; in particular, it describes the process of text spam and the tricks used by spammers. Moreover, the author describes the implementation of text content analysis and classification using different document-processing techniques (that is, stop words, short word forms, regular expressions, stemming, etc.) and a naive Bayesian classifier. In addition, the author depicts the practical work of the document processing and the naive Bayesian classifier towards implementing an accurate anti-spam system.
Key words: text spam, stop words, short word forms, regular expressions, stemming, document processing, naive Bayesian classifier.
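A minimal sketch of the overall pipeline, stop-word removal followed by a naive Bayesian text classifier, is shown below. It assumes scikit-learn and a tiny made-up training set, and omits the stemming, short-word-form, and regular-expression steps described in the paper.

```python
# Stop-word removal + naive Bayesian classifier sketch; training data is illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win a free prize now, click here",           # spam
    "cheap pills, limited offer, buy now",        # spam
    "meeting moved to 3pm, see agenda attached",  # ham
    "please review the draft report by friday",   # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(
    CountVectorizer(lowercase=True, stop_words="english"),  # tokenize and drop stop words
    MultinomialNB(),                                         # naive Bayesian classifier
)
model.fit(train_texts, train_labels)
print(model.predict(["free offer, click to win"]))           # expected: ['spam']
```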
International Conference on Future Computer and Communication | 2009
Mohamed Elsadig Eltahir; Laiha Mat Kiah; B. B. Zaidan; A. A. Zaidan
Steganography is the idea of hiding private or sensitive data or information within something that appears to be nothing out of the ordinary. In this paper we overview the use of data hiding techniques in digital video treated as a sequence of still images. We describe how the least significant bit (LSB) insertion method can be applied to video images or frames, and how properties of the human visual system can be used to increase the amount of data embedded in streaming digital video.
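The capacity point, that every frame contributes its own least significant bits, can be sketched as follows. The example assumes OpenCV and NumPy and a hypothetical cover file; it works on raw frames only (lossy re-encoding would destroy the embedded bits), and it is not the authors' implementation.

```python
# Spread an LSB payload across video frames; file name and bit layout are illustrative.
import cv2
import numpy as np

def embed_across_frames(video_path: str, payload: bytes) -> list:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    cap = cv2.VideoCapture(video_path)
    stego_frames, offset = [], 0
    while offset < bits.size:
        ok, frame = cap.read()
        if not ok:                                   # ran out of frames
            break
        flat = frame.reshape(-1)                     # view: edits modify the frame in place
        n = min(bits.size - offset, flat.size)       # bits that fit in this frame
        flat[:n] = (flat[:n] & 0xFE) | bits[offset:offset + n]
        offset += n
        stego_frames.append(frame)
    cap.release()
    return stego_frames

frames = embed_across_frames("cover.avi", b"secret data")    # hypothetical cover video
print(f"payload spread over {len(frames)} frame(s)")
```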
Sensors | 2018
Mohamed Aktham Ahmed; B. B. Zaidan; A. A. Zaidan; Mahmood Maher Salih; Muhammad Modi bin Lakulu
Loss of the ability to speak or hear exerts psychological and social impacts on the affected persons due to the lack of proper communication. Multiple and systematic scholarly interventions that vary according to context have been implemented to overcome disability-related difficulties. Sign language recognition (SLR) systems based on sensory gloves are significant innovations that aim to procure data on the shape or movement of the human hand. However, innovative technology in this area remains limited and dispersed. The available trends and gaps should be explored to provide valuable insights into these technological environments. Thus, a review is conducted to create a coherent taxonomy describing the latest research, divided into four main categories: development, framework, other hand gesture recognition, and reviews and surveys. We then analyze the characteristics of glove-based SLR devices, develop a roadmap for technology evolution, discuss its limitations, and provide valuable insights into technological environments. This will help researchers to understand the current options and gaps in this area, thus contributing to this line of research.