Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Taher Hamza is active.

Publication


Featured research published by Taher Hamza.


PLOS Computational Biology | 2013

Next-Generation Sequence Assembly: Four Stages of Data Processing and Computational Challenges

Sara El-Metwally; Taher Hamza; Magdi Zakaria; Mohamed Helmy

Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. We discuss these four stages of data analysis and processing as a framework and survey a variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that current assemblers face in the next-generation environment to determine the current state of the art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.
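As an illustration of the graph-construction stage the review describes, here is a minimal de Bruijn graph builder. This is a hypothetical sketch, not code from the paper: nodes are (k-1)-mers, and each k-mer occurring in a read contributes one prefix-to-suffix edge.

```python
from collections import defaultdict

def build_de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            # the k-mer's (k-1)-mer prefix points to its (k-1)-mer suffix
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# with k=3, the reads "ACGT" and "CGTA" yield the path AC -> CG -> GT -> TA
graph = build_de_bruijn_graph(["ACGT", "CGTA"], 3)
```

Real assemblers then simplify this graph (collapsing unambiguous paths, removing tips and bubbles) before extracting contigs, which corresponds to the third stage of the framework.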


International Conference on Computer Engineering and Systems | 2007

Mining Arabic text using soft-matching association rules

Aya M. Al-Zoghby; Ahmed Sharaf Eldin; Nabil A. Ismail; Taher Hamza

Text mining concerns the discovery of knowledge from unstructured textual data. One important task is the discovery of rules that relate specific words and phrases. Textual entries in many database fields exhibit minor variations that may prevent mining algorithms from discovering important patterns. Variations can arise from typographical errors, misspellings, and abbreviations, as well as other sources such as ambiguity. Ambiguity may be due to the derivation feature, which is very common in the Arabic language. This paper introduces a new system developed to discover soft-matching association rules using a similarity measurement based on the derivation feature of the Arabic language. In addition, it presents the benefits of using the Frequent Closed Itemsets (FCI) concept in mining the association rules rather than Frequent Itemsets (FI).
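The soft-matching idea, treating minor spelling variants as the same item, can be sketched with a normalized edit-distance similarity. This is an illustrative stand-in only; the paper's measure is based on Arabic derivation, which is not reproduced here.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def soft_match(a, b, threshold=0.8):
    """Treat two terms as the same item when their normalized similarity
    exceeds the threshold, absorbing typos and minor spelling variants."""
    longest = max(len(a), len(b), 1)
    return 1 - edit_distance(a, b) / longest >= threshold
```

Under a soft-matching rule, "colour" and "color" would be counted as one item when mining for frequent itemsets, whereas unrelated terms would not.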


2007 ITI 5th International Conference on Information and Communications Technology | 2007

A new methodology for Web testing

Fawzy A. Torkey; Arabi Keshk; Taher Hamza; Amal Ibrahim

As Web sites become a fundamental component of businesses, quality of service is one of the top management concerns. Users normally do not care about site failures, traffic jams, network bandwidth, or other indicators of system failure. To an online customer, quality of service means fast, predictable responses from a Web site in real time. Users measure quality by response time, availability, reliability, predictability, and cost. Poor quality implies that the customer will no longer visit the site, and hence the organization may lose business. Issues that affect quality include broken pages and faulty images, CGI-bin error messages, complex colour combinations, missing back links, multiple and frequent links, etc. We therefore built a program that can scan a Web site for broken links, broken images, broken pages, and other common Web site faults. Because a Web site cannot be tested as a whole in one attempt, our implementation decomposes the behavior of the Web site into testable components and then maps these components onto testable objects. We then show, using the JMeter performance testing tool, that broken components on a Web site degrade its performance.
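The decomposition step the abstract describes, turning a page into testable components, can be sketched with a small HTML walker that collects the link and image targets a scanner would then fetch. This is a hypothetical sketch using Python's standard library, not the tool from the paper.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect link and image targets: the testable components a site
    scanner would check for broken links and broken images."""
    def __init__(self):
        super().__init__()
        self.targets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.targets.append(("link", attrs["href"]))
        elif tag == "img" and "src" in attrs:
            self.targets.append(("image", attrs["src"]))

extractor = LinkExtractor()
extractor.feed('<a href="/home">Home</a><img src="logo.png">')
# each collected target would then be fetched; a non-2xx HTTP status
# (or a failed request) marks the component as broken
```

A full scanner would recurse over the collected links and report each broken component, mirroring the component-to-testable-object mapping in the paper.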


International Journal of Network Security | 2017

Pre-image Resistant Cancelable Biometrics Scheme Using Bidirectional Memory Model

Mayada Tarek; Osama Ouda; Taher Hamza

Cancelable biometrics is a promising template protection scheme that relies on encoding the raw biometric data using a non-invertible transformation function. Existing cancelable biometrics schemes ensure recoverability of compromised templates as well as users' privacy. However, these schemes cannot resist pre-image attacks. In this article, a pre-image resistant cancelable biometrics scheme is proposed, in which an associative memory is utilized to encode the cancelable transformation parameters while preserving high recognition performance. A bidirectional memory model is suggested to memorize each user's associated key using his biometric data based on association connectors. These connector values can be safely saved in storage along with the cancelable biometric template. The cancelable template is generated using an XOR operation between the biometric data and the associated key. Simulated experiments conducted on the CASIA-IrisV3-Interval dataset show that the presented scheme does not significantly affect the classification power of the raw biometric data. Moreover, the presented scheme resists complete or approximate disclosure of the raw biometric template.
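The template-generation step stated in the abstract, an XOR between the biometric data and the user's associated key, can be sketched as follows. The function name and the random per-user key are assumptions for illustration; the paper's key is produced by the bidirectional memory model, which is not reproduced here.

```python
import secrets

def make_cancelable_template(biometric_bits: bytes, user_key: bytes) -> bytes:
    """XOR the biometric feature bytes with a per-user associated key,
    as in the template-generation step described in the abstract."""
    assert len(biometric_bits) == len(user_key)
    return bytes(b ^ k for b, k in zip(biometric_bits, user_key))

key = secrets.token_bytes(8)                      # hypothetical per-user key
features = b"\x12\x34\x56\x78\x9a\xbc\xde\xf0"    # hypothetical feature bytes
template = make_cancelable_template(features, key)

# XOR is its own inverse, which is why a compromised template can be
# revoked: discard the old key and issue a fresh one for a new template
assert make_cancelable_template(template, key) == features
```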


Bioinformatics | 2016

LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads

Sara El-Metwally; Magdi Zakaria; Taher Hamza

Motivation: The deluge of currently sequenced data has exceeded Moore's Law, more than doubling every 2 years since next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data at high speed and fixed cost, but we lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive amount of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory.
Results: LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of [Formula: see text]-spaced sequenced [Formula: see text]-mers and the other holding [Formula: see text]-mers classified as likely correct using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves assembly accuracy and contiguity comparable to other competing tools. Our method reduces memory usage by [Formula: see text] compared to resource-efficient assemblers on benchmark datasets from the GAGE and Assemblathon projects. While LightAssembler can be considered a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage.
Availability and implementation: https://github.com/SaraEl-Metwally/LightAssembler
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
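The core data structure the abstract relies on, a Bloom filter holding sampled k-mers, can be illustrated with a minimal version. This is a hypothetical sketch, not LightAssembler's cache-oblivious pair of filters: it shows only the basic bit-array-plus-hashes membership structure.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus several hash positions.
    Membership tests can yield false positives but never false negatives,
    which is what makes the structure so memory-efficient for k-mer sets."""
    def __init__(self, size=1 << 16, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        # derive several deterministic bit positions from seeded SHA-256
        for seed in range(self.hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

kmers = BloomFilter()
for kmer in ("ACGTA", "CGTAC"):   # sampled k-mers from the reads
    kmers.add(kmer)
```

An assembler can then stream reads against such a filter and keep only k-mers that pass membership, avoiding an explicit in-memory graph of every raw k-mer.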


International Journal of Computational Intelligence Systems | 2015

Secure and Efficient Biometric-Data Binarization using Multi-Objective Optimization

Eslam Hamouda; Xiaohui Yuan; Osama Ouda; Taher Hamza

Biometric system databases are vulnerable to many types of attacks. To address this issue, several biometric template protection systems have been proposed to protect biometric data against unauthorized use. Many biometric protection systems require the biometric templates to be represented in binary form. Therefore, extracting binary templates from real-valued biometric data is a key step in such biometric data protection systems. In addition, a binary representation of biometric data can speed up the matching process and reduce the storage capacity required to store the enrolled templates. The main challenge of existing biometric data binarization approaches is to retain the discrimination power of the original real-valued templates after binarization. In this paper, we propose a secure and efficient biometric data binarization scheme that employs multi-objective optimization using the Nondominated Sorting Genetic Algorithm (NSGA-II). The goal of the proposed method is to find optimal quantization ...
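The binarization step itself, mapping real-valued templates to bits, can be illustrated with simple per-dimension median thresholding. This is a deliberately naive stand-in, not the paper's NSGA-II optimization: it only shows what "extracting binary templates from real-valued biometric data" means.

```python
import statistics

def binarize(templates):
    """Per-dimension median thresholding: each feature dimension gets a
    threshold (its median over the enrolled set), and every value above
    the threshold becomes 1, otherwise 0."""
    thresholds = [statistics.median(col) for col in zip(*templates)]
    return [
        [1 if value > t else 0 for value, t in zip(row, thresholds)]
        for row in templates
    ]

real_valued = [[0.1, 0.9], [0.4, 0.2], [0.8, 0.7]]
binary = binarize(real_valued)
```

The paper's contribution is choosing these quantization boundaries by multi-objective optimization so that the binary templates stay both discriminative and secure, rather than using a fixed rule like the median.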


International Conference on Informatics and Systems | 2010

Naive Bayes Classifier-based Arabic document categorization

Hatem M. Noaman; Samir Elmougy; Ahmed Ghoneim; Taher Hamza


Archive | 2008

Naïve Bayes Classifier for Arabic Word Sense Disambiguation

Samir Elmougy; Taher Hamza; Hatem M. Noaman


Journal of Emerging Technologies in Web Intelligence | 2013

Arabic Semantic Web Applications – A Survey

Aya M. Al-Zoghby; Ahmed Sharaf Eldin Ahmed; Taher Hamza


Archive | 2012

MANET Load Balancing Parallel Routing Protocol

Hesham A. Ali; Taher Hamza; Shadia Sarhan

Collaboration


Dive into Taher Hamza's collaboration.

Top Co-Authors

Xiaohui Yuan

University of North Texas
