Basheer Al-Duwairi
Jordan University of Science and Technology
Publications
Featured research published by Basheer Al-Duwairi.
International Conference on Computer Communications and Networks | 2004
Basheer Al-Duwairi; Thomas E. Daniels
Recently, several schemes have been proposed for IP traffic source identification to trace attacks that employ source address spoofing, such as denial of service (DoS) attacks. Most of these schemes are based on packet marking (i.e., augmenting IP packets with partial path information). A major challenge for packet marking schemes is the limited space available in the IP header for marking purposes. In this paper, we focus on this issue and propose topology-based encoding schemes supported by real Internet measurements. In particular, we propose an idealized deterministic edge-append scheme in which we assume that the IP header can be modified to include a marking option field of fixed size. We also propose a deterministic pipelined packet marking scheme that is backward compatible with IPv4 (i.e., requires no IP header modification). The validity of both schemes depends directly on the statistical information that we extract from large data sets representing Internet maps. Our studies show that it is possible to encode an entire path using 52 bits.
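The deterministic edge-append idea can be sketched as follows. This is a hypothetical illustration, not the paper's exact encoding: each router shifts a fixed-width link identifier into a marking field carried with the packet, and the victim decodes the path in reverse. The 4-bit identifier width is an assumption chosen only to show how a multi-hop path fits in a small field.

```python
# Hypothetical edge-append marking sketch (identifier width is illustrative).
ID_BITS = 4  # assumed per-hop link identifier width

def append_mark(marking: int, hop_count: int, link_id: int):
    """Router side: shift this hop's link identifier into the marking field."""
    assert 0 <= link_id < (1 << ID_BITS)
    return (marking << ID_BITS) | link_id, hop_count + 1

def decode_path(marking: int, hop_count: int):
    """Victim side: recover link identifiers, nearest router first."""
    path = []
    for _ in range(hop_count):
        path.append(marking & ((1 << ID_BITS) - 1))
        marking >>= ID_BITS
    return path

# At 4 bits per hop, a 13-hop path fits in 52 bits, consistent with the
# bound reported above.
marking, hops = 0, 0
for link in [3, 7, 1, 9]:
    marking, hops = append_mark(marking, hops, link)
assert decode_path(marking, hops) == [9, 1, 7, 3]
```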
International Conference on Internet Monitoring and Protection | 2010
Basheer Al-Duwairi; Lina Al-Ebbini
This paper proposes BotDigger, a fuzzy logic-based botnet detection system. In this system, we derive a set of logical rules based on well-known botnet characteristics. Utilizing these rules, an adaptive logic algorithm is applied to network traffic traces, searching for botnet footprints and associating a trust level with each host present in the sampled data. Future work will focus on evaluating the proposed approach using real traffic traces.
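A rule-based trust score in this spirit might look like the sketch below. The rule names, weights, and thresholds are hypothetical, not taken from the paper; the point is only how per-host traffic features can be combined into a graded trust level rather than a binary verdict.

```python
# Illustrative rule-based trust scoring (rules and weights are assumed).
def fuzzy_trust(features: dict) -> str:
    """Map simple per-host traffic features to a trust level."""
    score = 0.0
    if features.get("periodic_beaconing"):   # bot-like regular call-backs
        score += 0.4
    if features.get("irc_port_activity"):    # activity on IRC C&C ports
        score += 0.3
    # Failed DNS lookups (e.g., probing dead C&C domains) add up to 0.3.
    score += min(features.get("dns_failure_rate", 0.0), 1.0) * 0.3
    if score >= 0.7:
        return "low-trust (likely bot)"
    if score >= 0.4:
        return "medium-trust"
    return "high-trust"

print(fuzzy_trust({"periodic_beaconing": True, "dns_failure_rate": 0.8}))
```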
IEEE International Conference on Computer and Communications | 2005
Basheer Al-Duwairi; G. Manimaran
This paper presents a novel scheme to mitigate the effect of SYN flooding attacks. The scheme, called intentional dropping based filtering, is based on the observation of clients' persistence (i.e., a client's reaction to packet loss by subsequent retransmissions), which is very widespread as it is built into TCP's connection setup. The main idea is to intentionally drop the first SYN packet of each connection request. A subsequent SYN packet from the same request is passed only if it adheres to TCP's timeout mechanism. Our analysis shows that the proposed scheme reduces the attacker's effective attack rate significantly with an acceptable increase in connection establishment latency.
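A minimal sketch of the filtering logic, under assumed timing constants: the first SYN from each flow is dropped, and a retransmitted SYN is admitted only if it arrives within a plausible TCP retransmission window. The window bounds below are illustrative, not the paper's parameters.

```python
# Sketch of intentional-dropping-based filtering (timing values assumed).
import time

RTO_MIN, RTO_MAX = 1.0, 6.0  # assumed window for the first SYN retransmission
pending = {}                  # flow id -> time the first SYN was dropped

def handle_syn(flow_id, now=None) -> bool:
    """Return True if this SYN should be passed to the server."""
    now = time.monotonic() if now is None else now
    first_seen = pending.get(flow_id)
    if first_seen is None:
        pending[flow_id] = now   # intentionally drop the first SYN
        return False
    # A compliant TCP stack retransmits after roughly its initial RTO;
    # spoofed flood sources typically never retransmit at all.
    if RTO_MIN <= now - first_seen <= RTO_MAX:
        del pending[flow_id]
        return True
    return False                 # too early or too late: filtered

assert handle_syn("flow-1", now=0.0) is False   # first SYN dropped
assert handle_syn("flow-1", now=3.0) is True    # compliant retransmission passes
```

A real deployment would also need to bound the size of the pending table, since an attacker can fill it with spoofed first SYNs.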
Journal of Advanced Research | 2014
Basheer Al-Duwairi; Ahmad T. Al-Hammouri
Fast flux networks represent a special type of botnets that are used to provide highly available web services to a backend server, which usually hosts malicious content. Detection of fast flux networks continues to be a challenging issue because of the similar behavior between these networks and other legitimate infrastructures, such as CDNs and server farms. This paper proposes Fast Flux Watch (FF-Watch), a mechanism for online detection of fast flux agents. FF-Watch is envisioned to exist as a software agent at leaf routers that connect stub networks to the Internet. The core mechanism of FF-Watch is based on the inherent feature of fast flux networks: flux agents within stub networks take the role of relaying client requests to point-of-sale websites of spam campaigns. The main idea of FF-Watch is to correlate incoming TCP connection requests to flux agents within a stub network with outgoing TCP connection requests from the same agents to the point-of-sale website. Theoretical and traffic trace driven analysis shows that the proposed mechanism can be utilized to efficiently detect fast flux agents within a stub network.
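The core correlation in FF-Watch can be sketched as follows, with assumed data structures and thresholds: inbound TCP connection requests to a host in the stub network are recorded, and the host is flagged as a candidate flux agent when it repeatedly opens outbound connections shortly afterwards, consistent with relaying. The window and threshold values are illustrative.

```python
# Sketch of in/out TCP connection correlation (window/threshold assumed).
WINDOW = 2.0      # seconds within which an outbound request counts as relaying
THRESHOLD = 3     # correlated pairs required before flagging a host

incoming = {}     # host -> arrival times of inbound connection requests
flags = {}        # host -> count of correlated inbound/outbound pairs

def inbound(host, t):
    incoming.setdefault(host, []).append(t)

def outbound(host, t) -> bool:
    """Record an outbound request; return True once the host is flagged."""
    recent = [x for x in incoming.get(host, []) if 0 <= t - x <= WINDOW]
    if recent:
        flags[host] = flags.get(host, 0) + 1
    return flags.get(host, 0) >= THRESHOLD

inbound("10.0.0.5", 0.0); outbound("10.0.0.5", 0.5)
inbound("10.0.0.5", 1.0); outbound("10.0.0.5", 1.4)
inbound("10.0.0.5", 2.0)
assert outbound("10.0.0.5", 2.3) is True   # third correlated pair: flagged
```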
International Conference on Communications | 2009
Basheer Al-Duwairi; G. Manimaran
Botnet-based distributed denial of service (DDoS) attacks represent an emerging and sophisticated threat to today's Internet. Attackers are now able to mimic the behavior of legitimate users to a great extent, making the issue of countering these attacks very challenging. In this paper, we propose a simple yet effective scheme that enables an ISP's edge routers to pass a large percentage of legitimate traffic destined to a web server under DDoS attack within that ISP, while filtering all other traffic. The proposed scheme, called JUST-Google, is based on the fact that web search engines (especially Google™) represent the entrance to today's web, placing them in a strategic position to defend against these attacks. The main idea is that Google™ can assist in distinguishing human users from bot programs by directing users who want to access a web site under attack to a group of nodes that perform authentication, in which users are required to solve a reverse Turing test to obtain access to the web server. Performance analysis shows that the proposed scheme would enable legitimate clients to access a web site that is under attack with high probability.
Computer Communications | 2006
Basheer Al-Duwairi; G. Manimaran
Reflector-based DDoS attacks are feasible in a variety of request/reply-based protocols, including TCP, UDP, ICMP, and DNS. To mitigate these attacks, we advocate the concept of victim assistance and use it in the context of a novel scheme called pairing-based filtering (PF). The main idea of the PF scheme is to validate incoming reply packets by pairing them, in a distributed manner, with the corresponding request packets. This pairing is performed at the edge routers of the ISP perimeter that contains the victim, rather than at the edge router to which the victim is directly connected, providing protection from bandwidth exhaustion attacks in addition to the victim's resource exhaustion attacks. We evaluate the proposed scheme through analytical studies using two performance metrics, namely, the probability of allowing an attack packet into the ISP network and the probability of filtering a legitimate packet. Our analysis shows that the proposed scheme offers a high filtering rate for attack traffic while causing negligible collateral damage to legitimate traffic.
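The pairing step can be sketched as a match between a recorded outgoing request and an incoming reply. This is an illustrative single-router sketch under assumed data structures (the paper performs the pairing in a distributed manner across perimeter routers); the choice of key fields is protocol-specific.

```python
# Sketch of pairing-based reply validation (key fields are illustrative).
pending_requests = set()   # conceptually a distributed table at perimeter routers

def request_key(proto, client, server, ident):
    # For a DNS query this could include the query ID; for ICMP echo,
    # the (id, seq) pair. The exact fields depend on the protocol.
    return (proto, client, server, ident)

def on_outgoing_request(proto, src, dst, ident):
    """Edge router records a request leaving the ISP perimeter."""
    pending_requests.add(request_key(proto, src, dst, ident))

def on_incoming_reply(proto, src, dst, ident) -> bool:
    """Admit a reply only if it pairs with a recorded request."""
    key = request_key(proto, dst, src, ident)   # a reply reverses src/dst
    if key in pending_requests:
        pending_requests.discard(key)
        return True
    return False   # unsolicited reply: likely reflected attack traffic

on_outgoing_request("dns", "192.0.2.10", "198.51.100.53", 0x1A2B)
assert on_incoming_reply("dns", "198.51.100.53", "192.0.2.10", 0x1A2B) is True
assert on_incoming_reply("dns", "203.0.113.9", "192.0.2.10", 0x9999) is False
```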
Transactions on Emerging Telecommunications Technologies | 2017
Ahmad T. Al-Hammouri; Zaid Al-Ali; Basheer Al-Duwairi
Web server overload resulting from an application layer–based distributed denial-of-service (DDoS) attack or a flash crowd event continues to be a major problem in today's Internet because it renders the Web server unavailable in both cases. In this paper, we propose a novel system, called ReCAP, that handles server overload resulting from application layer–based DDoS attacks or flash crowd events. The system is envisioned as a service that can be provided to websites that have limited resources and no infrastructure in place to handle these events. The main goal of ReCAP is to filter attack traffic in the case of a DDoS attack and to provide users with basic information during a flash crowd event. The proposed system is composed of two main modules: (1) the HTTP redirect module, a stateless Hypertext Transfer Protocol server that redirects Web requests destined to the targeted Web server to the second module, and (2) the distributed Completely Automated Public Turing Test To Tell Computers and Humans Apart (CAPTCHA) service, which comprises a large number of powerful nodes suitably distributed across the internet, acting as a large distributed firewall. All requests to the origin Web server are redirected to the CAPTCHA nodes, which segregate legitimate clients from automated attacks by requiring them to solve a challenge. Upon a successful response, legitimate clients (humans) are forwarded through a given CAPTCHA node to the Web server. These CAPTCHA proxies are envisioned to be placed at the edge of the network, in the proximity of the clients, to curb communication delays, and thus perceived response times, and to relieve the core network from further traffic congestion. In particular, such an organization fits squarely in the fifth use case scenario presented in the European Telecommunications Standards Institute Mobile Edge Computing Industry Specification Group's introductory technical paper on Mobile-Edge Computing.
The performance evaluation shows that the proposed system is able to mitigate application-layer DDoS attacks while incurring acceptable delays for legitimate clients as a result of redirecting them to and via CAPTCHA nodes.
High Performance Interconnects | 2004
G. Manimaran; Basheer Al-Duwairi
The goal of this tutorial is to provide its audience with a comprehensive understanding of the state-of-the-art research and practice in Internet infrastructure security. In addition to discussions of attacks and countermeasures, issues such as performance, scalability, deployability, and high-speed implementations will also be covered.
Computational Science and Engineering | 2014
Zakaria Al-Qudah; Basheer Al-Duwairi; Osama Al-Khaleel
Distributed denial-of-service (DDoS) attacks constitute an ever-growing threat to the internet due to the scale of these attacks and the difficulty of mitigating them. In this paper, we propose a CDN-based DDoS protection service to counter attacks targeting the application layer of web servers. These attacks mimic flash crowd events by using large botnets to generate high volumes of requests for certain objects from the target. The proposed scheme, called Hideme, leverages the already-deployed, highly available, and massively distributed infrastructure of CDNs to provide protection against DDoS attacks. A website subscribing to this service can hide behind the DDoS protection provider when it comes under attack. To achieve this goal, Hideme combines the use of CAPTCHAs by CDN edge servers to distinguish humans from bots with migration to a secret IP address during the attack period. We evaluate the proposed scheme through extensive experiments over PlanetLab. Our results show that the proposed scheme achieves better performance in terms of effective download throughput while blocking malicious requests.
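The combination of the two ideas can be sketched as edge-server logic: unverified clients receive a CAPTCHA challenge, and only verified clients are proxied to the origin's attack-time secret IP address. All names, the token scheme, and the addresses below are hypothetical, not the paper's implementation.

```python
# Hypothetical edge-server gate combining CAPTCHA verification with a
# secret attack-time origin address (all values illustrative).
import hashlib
import hmac

SECRET_ORIGIN_IP = "203.0.113.77"   # assumed secret origin address during attack
EDGE_KEY = b"edge-server-signing-key"  # assumed per-edge signing key

def issue_token(client_ip: str) -> str:
    """Issued only after the client solves the CAPTCHA challenge."""
    return hmac.new(EDGE_KEY, client_ip.encode(), hashlib.sha256).hexdigest()

def handle_request(client_ip: str, token):
    """Challenge unverified clients; proxy verified ones to the hidden origin."""
    if token is None or not hmac.compare_digest(token, issue_token(client_ip)):
        return ("challenge", None)            # serve the CAPTCHA page
    return ("proxy", SECRET_ORIGIN_IP)        # forward to the secret origin IP

assert handle_request("198.51.100.4", None)[0] == "challenge"
tok = issue_token("198.51.100.4")
assert handle_request("198.51.100.4", tok) == ("proxy", "203.0.113.77")
```

Keeping the attack-time address secret only helps if the CDN, not the client, resolves it; bots that never solve the CAPTCHA never learn where the origin moved.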
Journal of Networks | 2013
Basheer Al-Duwairi; Zakaria Al-Qudah; Manimaran Govindarasu
Botnet-based distributed denial of service (DDoS) attacks represent an emerging and sophisticated threat to today's Internet. Attackers are now able to mimic the behavior of legitimate users to a great extent, making the issue of countering these attacks very challenging. This paper proposes a novel scheme to mitigate botnet-based DDoS attacks. The proposed scheme, called JUST-Google, utilizes Google's strategic position as an entrance to today's Internet to distinguish between legitimate traffic and attack traffic. The main idea of JUST-Google is to let an ISP's edge routers allow traffic originating from sources that are approved by Google and destined to a victim within that ISP to pass, while filtering all other traffic destined to the same victim. In this context, we propose that Google™ can offer a paid service to identify legitimate sources by directing users who want to access a web site under attack to a group of nodes that perform authentication, in which users are required to solve a reverse Turing test to obtain access to the web server. We evaluate the proposed scheme through a combination of theoretical analysis and experimental studies. Our studies show that JUST-Google gives legitimate clients a high chance of accessing a web site that is under a botnet-based DDoS attack without imposing significant overhead.