Sara Tedmori
Princess Sumaya University for Technology
Publication
Featured research published by Sara Tedmori.
Information Sciences | 2014
Sara Tedmori; Nijad Al-Najdawi
Lossless encryption methods are more applicable than lossy encryption methods when even marginal distortion is not tolerable. In this research, the authors propose a novel lossless symmetric-key encryption/decryption technique. In the proposed algorithm, the image is transformed into the frequency domain using the Haar wavelet transform, and the image sub-bands are then encrypted in such a way that guarantees a secure, reliable, and unbreakable form. The encryption involves scattering the distinguishable frequency data in the image amongst the rest of the frequencies using a reversible weighting factor. The algorithm is designed to shuffle and reverse the sign of each frequency in the transformed image before the image frequencies are transformed back to the pixel domain. The results show a total deviation in pixel values between the original and encrypted images. The decryption algorithm reverses the encryption process and restores the image to its original form. The proposed algorithm is evaluated using standard security and statistical methods; the results show that the proposed work is resistant to most known attacks and more secure than other algorithms in the cryptography domain.
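To make the high-level description concrete, the following is a minimal Python sketch of a Haar-based permutation-and-sign-flip scheme. It illustrates the general idea only and is not the authors' algorithm: the key-seeded shuffle and sign reversal stand in for the paper's reversible weighting factor, and all function names are hypothetical.

```python
import numpy as np

def haar2d(img):
    # single-level 2D Haar transform (image dimensions assumed even)
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return [ll, lh, hl, hh]

def ihaar2d(ll, lh, hl, hh):
    # exact inverse of haar2d, so the overall scheme stays lossless
    h, w = ll.shape
    a = np.zeros((2 * h, w)); d = np.zeros((2 * h, w))
    a[0::2, :] = ll + lh; a[1::2, :] = ll - lh
    d[0::2, :] = hl + hh; d[1::2, :] = hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[:, 0::2] = a + d; img[:, 1::2] = a - d
    return img

def encrypt(img, key):
    rng = np.random.default_rng(key)
    out = []
    for band in haar2d(img.astype(float)):
        flat = band.ravel()
        perm = rng.permutation(flat.size)            # key-driven shuffle
        signs = rng.choice([-1.0, 1.0], flat.size)   # key-driven sign reversal
        out.append((flat[perm] * signs).reshape(band.shape))
    return out

def decrypt(bands, key):
    rng = np.random.default_rng(key)   # same key regenerates the same perm/signs
    rec = []
    for band in bands:
        flat = band.ravel()
        perm = rng.permutation(flat.size)
        signs = rng.choice([-1.0, 1.0], flat.size)
        inv = np.empty_like(flat)
        inv[perm] = flat * signs       # undo sign flip, then unshuffle
        rec.append(inv.reshape(band.shape))
    return ihaar2d(*rec)
```

Because the permutation and sign flips are regenerated from the same key, decryption exactly inverts encryption, which is what preserves the lossless property in this kind of scheme.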
Applied Soft Computing | 2015
Nijad Al-Najdawi; Mariam Biltawi; Sara Tedmori
Highlights: reveal the optimal combination of various enhancement methods; segment the breast region in order to obtain better visual interpretation; assist radiologists in making accurate decisions, analyses, and classifications; tumor classification accuracy and sensitivity values of 81.1% and 86%, respectively; participating radiologists are pleased with the results and acknowledged the work.
Mammography is the most effective technique for breast cancer screening and detection of abnormalities. However, early detection of breast cancer is dependent on both the radiologists' ability to read mammograms and the quality of mammogram images. In this paper, the researchers have investigated combining several image enhancement algorithms to enhance the performance of breast-region segmentation. The masses that appear in mammogram images are further analyzed and classified into four categories: benign, probable benign and possible malignant, probable malignant and possible benign, and malignant. The main contribution of this work is to reveal the optimal combination of various enhancement methods and to segment the breast region in order to obtain better visual interpretation, analysis, and classification of mammogram masses, thereby assisting radiologists in making more accurate decisions. The experimental dataset consists of more than 1300 mammogram images from both the King Hussein Cancer Center and Jordan Hospital. The results achieved a tumor classification accuracy of 90.7%. Moreover, the results showed a sensitivity of 96.2% and a specificity of 94.4% for the mass classification algorithm. Radiologists from both institutes have acknowledged the results and confirmed that this work has led to better visual quality images and that the segmentation and classification of tumors have aided the radiologists in making their diagnoses.
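As a rough illustration of an enhancement-plus-segmentation pipeline of the kind described above (the paper's optimal combination of methods is not reproduced here), the sketch below chains CLAHE contrast enhancement, median filtering, and Otsu thresholding with OpenCV; the function name, parameter values, and choice of methods are assumptions made for the example.

```python
import cv2
import numpy as np

def enhance_and_segment(mammogram_path):
    """Hypothetical pipeline: combined enhancement followed by
    breast-region segmentation; not the paper's exact combination."""
    img = cv2.imread(mammogram_path, cv2.IMREAD_GRAYSCALE)

    # Step 1: combine two enhancement methods (CLAHE + median filtering)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.medianBlur(clahe.apply(img), 5)

    # Step 2: Otsu threshold, then keep the largest connected component
    # as the breast region (discards background and small artifacts)
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # label 0 = background
    breast_mask = (labels == largest).astype(np.uint8) * 255

    return enhanced, breast_mask
```

In practice, the reported work evaluates many such combinations against radiologists' visual interpretation; the sketch only shows how one candidate combination would be wired together.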
Information Sciences | 2014
Nijad Al-Najdawi; M. Noor Al-Najdawi; Sara Tedmori
The large amount of bandwidth required for the transmission or storage of digital video is the main incentive for researchers to develop algorithms that compress video data whilst keeping its quality as high as possible. Motion estimation algorithms are used for video compression as they reduce the memory requirements of a video file while maintaining its high quality. Block matching has been extensively utilized in compression algorithms for motion estimation. One of the main components of block matching techniques is the search method used to locate block movements between consecutive video frames, whose aim is to reduce the number of comparisons. One of the most effective search methods, which yields accurate results but is computationally very expensive, is the Full Search algorithm. Researchers therefore try to develop fast-search motion estimation algorithms that reduce the computational cost required by the Full Search. In this research, the authors present a new fast search algorithm based on the hierarchical search approach, in which the number of searched locations is reduced compared to the Full Search. The original image is sub-sampled into two additional levels. The Full Search is performed on the highest level, where the complexity is relatively low. The Enhanced Three-Step Search algorithm and a newly proposed search algorithm are used in the two consecutive lower levels. Using standard accuracy measurements and the standard set of video sequences, the results show that the performance of the proposed hierarchical search algorithm is close to that of the Full Search, with an 83.4% reduction in complexity and a matching quality of over 98%.
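The following Python sketch illustrates the general hierarchical idea: a full search on a heavily sub-sampled top level, followed by small local refinements on the finer levels. It is a simplified stand-in, not the paper's method; the Enhanced Three-Step Search and the newly proposed algorithm are replaced here with a plain ±1 refinement, and all names, block sizes, and search radii are illustrative.

```python
import numpy as np

def sad(a, b):
    # sum of absolute differences between two blocks
    return np.abs(a.astype(int) - b.astype(int)).sum()

def downsample(frame):
    # 2x2 averaging; frame dimensions assumed even
    return (frame[0::2, 0::2].astype(float) + frame[1::2, 0::2]
            + frame[0::2, 1::2] + frame[1::2, 1::2]) / 4.0

def search(ref, cur_block, y, x, cy, cx, radius):
    """Exhaustive search around candidate vector (cy, cx) within +/- radius."""
    bh, bw = cur_block.shape
    best, best_mv = np.inf, (cy, cx)
    for dy in range(cy - radius, cy + radius + 1):
        for dx in range(cx - radius, cx + radius + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= ref.shape[0] - bh and 0 <= rx <= ref.shape[1] - bw:
                cost = sad(ref[ry:ry + bh, rx:rx + bw], cur_block)
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv

def hierarchical_me(ref, cur, block=16, top_radius=4):
    """One motion vector per block via a 3-level pyramid (dims divisible by 4)."""
    refs = [ref, downsample(ref), downsample(downsample(ref))]
    curs = [cur, downsample(cur), downsample(downsample(cur))]
    mvs = {}
    for y in range(0, cur.shape[0] - block + 1, block):
        for x in range(0, cur.shape[1] - block + 1, block):
            # full search on the coarsest level, where it is cheap
            dy, dx = search(refs[2],
                            curs[2][y//4:y//4 + block//4, x//4:x//4 + block//4],
                            y // 4, x // 4, 0, 0, top_radius)
            # scale the vector up and refine locally on the two finer levels
            for lvl in (1, 0):
                dy, dx = 2 * dy, 2 * dx
                s = 2 ** lvl
                blk = curs[lvl][y//s:y//s + block//s, x//s:x//s + block//s]
                dy, dx = search(refs[lvl], blk, y // s, x // s, dy, dx, 1)
            mvs[(y, x)] = (dy, dx)
    return mvs
```

The saving comes from the coarsest level: the exhaustive search runs on an image one sixteenth the size, and the finer levels only check a handful of candidates around the propagated vector.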
international conference on computer science and information technology | 2014
Mujahed Jarad; Nijad Al-Najdawi; Sara Tedmori
Signatures are important biometric attributes of humans that have long been used for authorization purposes. Most organizations primarily focus on the visual appearance of the signature for verification purposes. Many documents, such as forms, contracts, bank cheques, and credit card transactions, require a signature. Therefore, it is of utmost importance to be able to recognize signatures accurately, effortlessly, and in a timely manner. In this work, an artificial neural network based on the well-known back-propagation algorithm is used for recognition and verification. To test the performance of the system, the False Reject Rate (FRR), the False Accept Rate (FAR), and the Equal Error Rate (EER) are calculated. The system was tested with 400 test signature samples, which include genuine and forged signatures of twenty individuals. The aim of this work is to limit the computer's sole authority in deciding whether a signature is forged, and to allow signature verification personnel to participate in the decision process through a label that indicates the degree of similarity between the signature to be recognized and the original signature. This approach allows the signature's authenticity to be judged more effectively.
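The error-rate metrics mentioned above can be illustrated with a short threshold sweep over verifier similarity scores. The scores below are synthetic and the network itself is omitted; the snippet only shows how FAR, FRR, and EER would be computed from a verifier's outputs.

```python
import numpy as np

def far_frr_eer(genuine_scores, forged_scores):
    """Sweep a decision threshold over similarity scores and report the
    False Accept Rate, False Reject Rate, and Equal Error Rate.
    Assumption: higher score = more similar to the enrolled signature."""
    thresholds = np.sort(np.concatenate([genuine_scores, forged_scores]))
    best_gap, eer, rates = np.inf, None, []
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine signature rejected
        far = np.mean(forged_scores >= t)    # forgery accepted
        rates.append((t, far, frr))
        if abs(far - frr) < best_gap:        # EER is where FAR and FRR meet
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return rates, eer

# toy usage with made-up similarity scores (not data from the paper)
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 200)
forged = rng.normal(0.5, 0.15, 200)
_, eer = far_frr_eer(genuine, forged)
print(f"EER ~ {eer:.3f}")
```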
Information and Communication Systems (ICICS), 2016 7th International Conference on | 2016
Mariam Biltawi; Wael Etaiwi; Sara Tedmori; Amjad Hudaib; Arafat Awajan
With the advent of online data, sentiment analysis has received growing attention in recent years. Sentiment analysis aims to determine the overall sentiment orientation of a speaker or writer towards a specific entity or towards a specific feature of that entity. A fundamental task of sentiment analysis is sentiment classification, which aims to automatically classify opinionated text as positive, negative, or neutral. Although the literature on sentiment classification is quite extensive, only a few endeavors to classify opinionated text written in the Arabic language can be found. This paper provides a comprehensive survey of existing lexicon-based, machine learning, and hybrid sentiment classification techniques for the Arabic language.
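As a minimal example of the lexicon-based family surveyed above (not taken from any of the surveyed systems), the snippet below sums word polarities from a tiny hand-made Arabic lexicon and maps the total to a positive/negative/neutral label; real systems add stemming, negation handling, and weighting.

```python
def lexicon_sentiment(tokens, lexicon):
    """Minimal lexicon-based classifier: sum word polarities and map the
    total to positive / negative / neutral. Lexicon entries are invented."""
    score = sum(lexicon.get(t, 0) for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# toy Arabic lexicon (placeholder entries, not from the paper)
toy_lexicon = {"رائع": 1, "جميل": 1, "سيء": -1, "ممل": -1}
print(lexicon_sentiment("الفيلم رائع و جميل".split(), toy_lexicon))  # positive
```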
Multimedia Tools and Applications | 2018
Malik Qasaimeh; Raad S. Al-Qassas; Sara Tedmori
In the past few years, various lightweight cryptographic algorithms have been proposed to balance the trade-offs between the requirements of resource constrained IoT devices and the need to securely transmit and protect data. However, it is critical to analyze and evaluate these algorithms to examine their capabilities. This paper provides a thorough investigation of the randomness of ciphertext obtained from Simeck, Kasumi, DES and AES. The design of our randomness analysis is based on five metrics implemented following the guidance of the NIST statistical test suite for cryptographic applications. This analysis also provides performance and power consumption evaluations for the selected cryptographic algorithms using different platforms and measures. Results from the evaluation reveal that lightweight algorithms have competitive randomness levels, lower processing time and lower power consumption when compared to conventional algorithms.
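One of the NIST SP 800-22 style checks alluded to above is the frequency (monobit) test. The sketch below implements that single test on a ciphertext bit string as an illustration of how such randomness metrics are computed; the hex ciphertext is arbitrary, and the paper's full five-metric design is not reproduced.

```python
import math

def monobit_frequency_test(bits):
    """NIST SP 800-22 frequency (monobit) test on a bit string;
    one randomness metric among several, not the paper's full suite."""
    n = len(bits)
    s = sum(1 if b == '1' else -1 for b in bits)   # count 1s vs 0s
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value  # p >= 0.01 => sequence considered random at the 1% level

# toy usage on a hypothetical ciphertext rendered as bits
ciphertext = bytes.fromhex("9f86d081884c7d659a2feaa0c55ad015")
bits = ''.join(f"{byte:08b}" for byte in ciphertext)
print(monobit_frequency_test(bits))
```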
International Journal of Information Systems in The Service Sector | 2017
Mahmood Alsaadi; Malik Qasaimeh; Sara Tedmori; Khaled Almakadmeh
Healthcare businesses are responsible for keeping patient data safe and secure by following the rules of the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA). Agile software organizations that deal with healthcare software systems face a number of challenges in demonstrating that their process activities conform to the rules of HIPAA. Such organizations must establish a software process life cycle and develop procedures, tools, and methodologies that can manage the HIPAA requirements during the different stages of system development, and must also provide evidence of HIPAA conformity. This paper proposes an auditing model for HIPAA security and privacy rules in XP environments. The design of the proposed model is based on an evaluation theory that takes as its input the work of Lopez (ATAM) and the concepts of the Common Criteria (CC) standard. The proposed auditing model has been assessed using four case studies. The auditing results show that the proposed model is capable of capturing the auditing evidence in most of the selected case studies.
International Journal of Advanced Computer Science and Applications | 2016
Nijad Al-Najdawi; Sara Tedmori; Omar A. Alzubi; Osama Dorgham; Jafar A. Alzubi
Numerous fast-search block motion estimation algorithms have been developed to circumvent the high computational cost required by the full-search algorithm. These techniques, however, often converge to a local minimum, which makes them subject to noise and matching errors. Hence, many spatial-domain block matching algorithms have been developed in the literature. These algorithms exploit the high correlation that exists between pixels inside each frame block. With block-transformed frequencies, however, block matching can be used to test the similarities between a subset of selected frequencies that identify each block uniquely; therefore, fewer comparisons are performed, resulting in a considerable reduction in complexity. In this work, a two-level hierarchical fast-search motion estimation algorithm is proposed in the frequency domain. This algorithm incorporates a novel search pattern at the top level of the hierarchy. The proposed hierarchical method for motion estimation not only produces consistent motion vectors within each large object, but also accurately estimates the motion of small objects, with a substantial reduction in complexity when compared to other benchmark algorithms.
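To illustrate the frequency-domain matching idea, i.e. comparing a small subset of transform coefficients instead of all pixels, the sketch below uses a k x k low-frequency DCT signature per block. The search pattern, coefficient selection, and function names are illustrative assumptions, not the proposed algorithm.

```python
import numpy as np
from scipy.fft import dctn

def block_signature(block, k=4):
    """Keep only the k x k lowest-frequency DCT coefficients as the
    block's signature: far fewer values to compare than full-pixel SAD."""
    return dctn(block.astype(float), norm='ortho')[:k, :k]

def freq_domain_match(ref, cur_block, y, x, radius=7, k=4):
    """Search +/- radius around (y, x) in the reference frame, matching
    on DCT-coefficient subsets (illustrative exhaustive pattern)."""
    bh, bw = cur_block.shape
    target = block_signature(cur_block, k)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= ref.shape[0] - bh and 0 <= rx <= ref.shape[1] - bw:
                cand = block_signature(ref[ry:ry + bh, rx:rx + bw], k)
                cost = np.abs(cand - target).sum()
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv
```

With k = 4 and 16 x 16 blocks, each candidate comparison involves 16 coefficients rather than 256 pixels, which is where the complexity reduction comes from.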
International Conference on Multimedia Computing and Systems | 2014
Sara Tedmori
Fast-search motion estimation algorithms, when compared to full-search motion estimation algorithms, often converge to a local minimum while providing a significant reduction in computational cost. However, the motion vector measurement process in fast-search algorithms is subject to noise and matching errors. Researchers have investigated the use of mathematical tools for stochastic estimation from noisy measurements in order to obtain optimal estimates. Amongst these tools is the conventional Kalman filter, which addresses the general problem of estimating the state of a discrete-time controlled process governed by a linear stochastic difference equation. This research investigates the possible combinations of benchmark motion estimation algorithms and the Kalman filter. In this paper, the author presents an in-depth investigation and a detailed analysis of this combination, and seeks to establish the conditions under which its application would be successful. Experimental results show that this is possible only under certain conditions and constraints on the properties of the video sequences being coded. Furthermore, a recommendation is made on when it is possible to use the adaptive Kalman filter instead of the conventional filter to enhance the motion vectors at the cost of extra complexity.
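A minimal sketch of the conventional Kalman filter applied to a sequence of motion-vector components is shown below. It assumes a simple constant-motion model with hand-picked noise variances and illustrates the predict/update cycle discussed above rather than the paper's exact setup.

```python
import numpy as np

def kalman_smooth_mv(measurements, q=0.01, r=1.0):
    """Conventional scalar Kalman filter over one motion-vector component
    measured by a fast-search algorithm. Illustrative model: motion assumed
    roughly constant, with process noise q and measurement noise r."""
    x_est, p_est = measurements[0], 1.0
    smoothed = [x_est]
    for z in measurements[1:]:
        # predict (identity state transition: constant-motion assumption)
        x_pred, p_pred = x_est, p_est + q
        # update with the new noisy block-matching measurement z
        k_gain = p_pred / (p_pred + r)
        x_est = x_pred + k_gain * (z - x_pred)
        p_est = (1 - k_gain) * p_pred
        smoothed.append(x_est)
    return np.array(smoothed)

# toy usage: noisy horizontal motion around a true value of 3 pixels/frame
noisy_mvx = 3 + np.random.default_rng(1).normal(0, 1.0, 20)
print(kalman_smooth_mv(noisy_mvx))
```

The conditions discussed in the paper matter here: the filter helps when the underlying motion really is smooth over time, and can hurt when the sequence contains abrupt motion changes that violate the model.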
The International Arab Journal of Information Technology | 2012
Sara Tedmori; Nijad Al-Najdawi