Publication


Featured research published by Hassan A. Sleiman.


IEEE Transactions on Knowledge and Data Engineering | 2013

A Survey on Region Extractors from Web Documents

Hassan A. Sleiman; Rafael Corchuelo

Extracting information from web documents has become a research area in which new proposals sprout year after year. This has motivated several researchers to work on surveys that attempt to provide an overall picture of the many existing proposals. Unfortunately, none of these surveys provides a complete picture, because they do not take region extractors into account. These tools act as preprocessors: they help information extractors focus on the regions of a web document that contain relevant information. With the increasing complexity of web documents, region extractors are becoming essential for extracting information from many websites. Beyond information extraction, region extractors have also found their way into information retrieval, focused web crawling, topic distillation, adaptive content delivery, mashups, and metasearch engines. In this paper, we survey the existing proposals regarding region extractors and compare them side by side.


IEEE Transactions on Knowledge and Data Engineering | 2014

Trinity: On Using Trinary Trees for Unsupervised Web Data Extraction

Hassan A. Sleiman; Rafael Corchuelo

Web data extractors are used to extract data from web documents in order to feed automated processes. In this article, we propose a technique that works on two or more web documents generated by the same server-side template; it learns a regular expression that models the template and can later be used to extract data from similar documents. The technique builds on the hypothesis that the template introduces shared patterns that do not carry any relevant data and can therefore be ignored. We have evaluated and compared our technique to others in the literature on a large collection of web documents; our results demonstrate that our proposal performs better than the others and that input errors do not have a negative impact on its effectiveness. Furthermore, its efficiency can easily be boosted by means of a couple of parameters, without sacrificing effectiveness.
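
The shared-pattern hypothesis lends itself to a compact illustration. The sketch below is not the authors' Trinity implementation; the tag-based tokenisation and the greedy search for the longest shared run are simplifying assumptions. It recursively keeps the longest token run shared by all documents as template text and recurses on the prefixes and suffixes (a prefix/separator/suffix split in the spirit the title alludes to), yielding a regular expression whose groups capture the variable data:

```python
import re

def longest_shared(seqs):
    """Longest token run that occurs in every sequence (None if none)."""
    base = min(seqs, key=len)
    for size in range(len(base), 0, -1):
        for start in range(len(base) - size + 1):
            cand = base[start:start + size]
            if all(contains(s, cand) for s in seqs):
                return cand
    return None

def contains(seq, sub):
    return any(seq[i:i + len(sub)] == sub for i in range(len(seq) - len(sub) + 1))

def learn_regex(token_seqs):
    """Shared runs become literal template text; the rest become groups."""
    shared = longest_shared(token_seqs)
    if shared is None:
        return "(.*?)" if any(token_seqs) else ""
    prefixes, suffixes = [], []
    for seq in token_seqs:
        i = next(i for i in range(len(seq)) if seq[i:i + len(shared)] == shared)
        prefixes.append(seq[:i])
        suffixes.append(seq[i + len(shared):])
    literal = "".join(re.escape(t) for t in shared)
    return learn_regex(prefixes) + literal + learn_regex(suffixes)

docs = ["<b>Name:</b>Alice<i>30</i>", "<b>Name:</b>Bob<i>25</i>"]
tokens = [re.findall(r"<[^>]+>|[^<]+", d) for d in docs]
pattern = learn_regex(tokens)
print(re.fullmatch(pattern, docs[0]).groups())  # -> ('Alice', '30')
```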


Knowledge-Based Systems | 2013

TEX: An efficient and effective unsupervised Web information extractor

Hassan A. Sleiman; Rafael Corchuelo

The World Wide Web is an immense information resource. Web information extraction is the task of transforming human-friendly web information into structured information that can be consumed by automated business processes. In this article, we propose an unsupervised information extractor that works on two or more web documents generated by the same server-side template. It finds and removes shared token sequences amongst these web documents until only the relevant information that should be extracted from them remains. The technique is completely unsupervised and requires no maintenance; it works on malformed web documents and does not require the relevant information to be formatted using repetitive patterns. Our complexity analysis reveals that our proposal is computationally tractable, and our empirical study on real-world web documents demonstrates that it runs very fast and achieves very high precision and recall.
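
The find-and-remove idea can be sketched in a few lines. The snippet below is only an illustration under simplifying assumptions (whitespace tokenisation, a fixed maximum run length), not the paper's exact algorithm: it repeatedly splits every document around the longest token runs they all share, and whatever survives is reported as candidate data:

```python
def shared_runs(frag_lists, size):
    """Token runs of `size` that occur in some fragment of every document."""
    def runs(doc):
        return {tuple(f[i:i + size]) for f in doc for i in range(len(f) - size + 1)}
    common = runs(frag_lists[0])
    for doc in frag_lists[1:]:
        common &= runs(doc)
    return common

def split_all(doc, run):
    """Split every fragment of a document around each occurrence of `run`."""
    out = []
    for frag in doc:
        i = 0
        while i <= len(frag) - len(run):
            if tuple(frag[i:i + len(run)]) == run:
                out.append(frag[:i])          # keep what precedes the shared run
                frag = frag[i + len(run):]    # drop the run, rescan the rest
                i = 0
            else:
                i += 1
        out.append(frag)
    return [f for f in out if f]              # discard empty fragments

def tex(docs, max_len=10):
    frag_lists = [[doc.split()] for doc in docs]   # one fragment per document
    for size in range(max_len, 0, -1):             # longest shared runs first
        runs = shared_runs(frag_lists, size)
        while runs:
            frag_lists = [split_all(d, sorted(runs)[0]) for d in frag_lists]
            runs = shared_runs(frag_lists, size)
    return [[" ".join(f) for f in d] for d in frag_lists]

a = "Title : The Hobbit Price : 10 EUR"
b = "Title : Dune Price : 12 EUR"
print(tex([a, b]))   # [['The Hobbit', '10'], ['Dune', '12']]
```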


Neurocomputing | 2014

A class of neural-network-based transducers for web information extraction

Hassan A. Sleiman; Rafael Corchuelo

The Web is a huge and still-growing information repository that has attracted the attention of many companies. Many such companies rely on information extractors to feed the information buried in semi-structured web documents into automated business processes. Many information extractors build on extraction rules, which can be handcrafted or learned using supervised or unsupervised techniques. The literature provides a variety of techniques to learn information extraction rules, but they build on ad hoc machine-learning methods. In this paper, we propose a hybrid approach that explores the use of standard machine-learning techniques to extract web information. We have specifically explored using neural networks; our results show that our proposal outperforms three state-of-the-art techniques in the literature, which opens up a new approach to information extraction.
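
As a rough illustration of casting extraction as token classification with a standard machine-learning toolkit, the sketch below trains scikit-learn's MLPClassifier on hand-labelled template/data tokens; the library, the feature set, and the toy labels are all assumptions made for brevity, not the model described in the paper:

```python
from sklearn.neural_network import MLPClassifier

def features(tokens, i):
    t = tokens[i]
    return [
        int(t.startswith("<")),                                   # looks like markup
        int(t.isdigit()),                                         # numeric content
        len(t),                                                   # token length
        int(tokens[i - 1].startswith("<")) if i > 0 else 0,       # left neighbour is markup
        int(tokens[i + 1].startswith("<")) if i + 1 < len(tokens) else 0,  # right neighbour
    ]

# toy training set: tokens from two template documents, labelled 1 = data, 0 = template
docs = [["<b>", "Alice", "</b>", "<i>", "30", "</i>"],
        ["<b>", "Bob", "</b>", "<i>", "25", "</i>"]]
labels = [[0, 1, 0, 0, 1, 0],
          [0, 1, 0, 0, 1, 0]]

X = [features(d, i) for d in docs for i in range(len(d))]
y = [lab for doc_labels in labels for lab in doc_labels]

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)

unseen = ["<b>", "Carol", "</b>", "<i>", "41", "</i>"]
predicted = clf.predict([features(unseen, i) for i in range(len(unseen))])
print([t for t, p in zip(unseen, predicted) if p == 1])   # expected: ['Carol', '41']
```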


Practical Applications of Agents and Multi-Agent Systems | 2010

Integrating Deep-Web Information Sources

Iñaki Fernández de Viana; Inma Hernández; Patricia Jiménez; Carlos R. Rivero; Hassan A. Sleiman

Deep-web information sources are difficult to integrate into automated business processes if they only provide a search form. A wrapping agent is a piece of software that allows a developer to query such information sources without worrying about the details of interacting with such forms. Our goal is to help software engineers construct wrapping agents that interpret queries written in high-level structured languages. This should help reduce integration costs, because it relieves developers of the burden of transforming their queries into low-level interactions in an ad hoc manner. In this paper, we report on our reference framework, delve into the related work, and highlight current research challenges, with the aim of guiding future research efforts in this area.
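
A minimal sketch of the wrapping idea, assuming a hypothetical bookshop search form (the URL and field names below are invented): the agent knows the form's fields, translates a high-level structured query into form parameters, and hides the interaction behind a single method:

```python
from urllib.parse import urlencode

class WrappingAgent:
    """Exposes a form-based deep-web source as a structured query interface."""
    def __init__(self, form_url, field_map):
        self.form_url = form_url
        self.field_map = field_map   # high-level attribute -> form field name

    def build_request(self, **query):
        params = {self.field_map[attr]: value for attr, value in query.items()}
        return f"{self.form_url}?{urlencode(params)}"

# hypothetical bookshop search form
agent = WrappingAgent(
    "https://example.org/search",
    {"title": "q_title", "max_price": "price_lte"},
)
print(agent.build_request(title="Dune", max_price=12))
# https://example.org/search?q_title=Dune&price_lte=12
```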


Practical Applications of Agents and Multi-Agent Systems | 2012

Information Extraction Framework

Hassan A. Sleiman; Rafael Corchuelo

The literature provides many techniques to infer rules that can be used to configure web information extractors. Unfortunately, these techniques have been developed independently, which makes it very difficult to compare their results: there is not even a common collection of datasets on which they can be assessed. Furthermore, there is no common infrastructure on which to implement these techniques, which makes implementing them costly. In this paper, we propose a framework that helps software engineers implement their techniques and compare the results. Such a framework allows techniques to be compared side by side, and our experiments show that it helps reduce development costs.
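
The kind of common infrastructure the paper argues for can be hinted at with an assumed interface; this is not the framework's real API. Every technique implements the same two methods, so datasets and metrics are written once and shared by all of them:

```python
from abc import ABC, abstractmethod

class ExtractionTechnique(ABC):
    """The single interface every technique implements in this sketch."""
    @abstractmethod
    def learn(self, documents): ...        # infer extraction rules from samples
    @abstractmethod
    def extract(self, document): ...       # apply them; return a set of records

def benchmark(technique, datasets):
    """Precision/recall per dataset, computed identically for every technique."""
    results = {}
    for name, (train_docs, test_doc, gold) in datasets.items():
        technique.learn(train_docs)
        found = technique.extract(test_doc)
        tp = len(found & gold)
        precision = tp / len(found) if found else 0.0
        recall = tp / len(gold) if gold else 0.0
        results[name] = (precision, recall)
    return results

class SeenTokens(ExtractionTechnique):
    """Degenerate baseline: 'extracts' every token it saw during training."""
    def learn(self, documents):
        self.vocab = {t for d in documents for t in d.split()}
    def extract(self, document):
        return {t for t in document.split() if t in self.vocab}

print(benchmark(SeenTokens(), {"toy": (["a b c"], "a x c", {"a", "c"})}))
# {'toy': (1.0, 1.0)}
```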


Conference on Advanced Information Systems Engineering | 2012

A Reference Architecture to Devise Web Information Extractors

Hassan A. Sleiman; Rafael Corchuelo

The Web is the largest repository of human-friendly information. Unfortunately, web information is embedded in formatting tags and surrounded by irrelevant content. Researchers are working on information extractors that transform this information into structured data for later integration into automated processes. Devising a new information extraction technique requires an array of tasks, some specific to the technique and many common to all techniques. The literature lacks a reference architecture to guide software engineers in the design and implementation of information extractors; this leads to little reuse, and the focus is usually blurred by irrelevant details. In this paper, we present a reference architecture for designing and implementing rule learners for information extractors. We have implemented a software framework to support our architecture, and we have validated it by means of four case studies and a number of experiments that show that our proposal helps reduce development costs significantly.
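
One way to picture the split between common and technique-specific tasks is a template method; this is an assumed design for illustration, not the authors' actual architecture. The base class fixes the shared pipeline, and subclasses supply only the rule-induction step:

```python
class RuleLearner:
    """Common pipeline shared by all learners (template method pattern)."""
    def learn(self, labelled_docs):
        token_docs = [(self.tokenise(d), spans) for d, spans in labelled_docs]
        return self.induce_rules(token_docs)

    def tokenise(self, document):
        # common task: every learner gets the same tokenisation for free
        return document.split()

    def induce_rules(self, token_docs):
        raise NotImplementedError   # the only technique-specific part

class KeywordLearner(RuleLearner):
    """Degenerate example: a 'rule' is the set of tokens ever labelled as data."""
    def induce_rules(self, token_docs):
        return {t for tokens, spans in token_docs for t in tokens if t in spans}

learner = KeywordLearner()
rules = learner.learn([("name : Alice age : 30", {"Alice", "30"})])
print(rules)   # {'Alice', '30'} (set order may vary)
```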


Web Information Systems Modeling | 2011

A conceptual framework for efficient web crawling in virtual integration contexts

Inma Hernández; Hassan A. Sleiman; David Ruiz; Rafael Corchuelo

Virtual integration systems require a crawling tool able to navigate the Web and reach relevant pages efficiently. Existing proposals in the crawling area are aware of the efficiency problem, but most of them still need to download pages in order to classify them as relevant or not. In this paper, we present a conceptual framework for designing crawlers supported by a web page classifier that relies solely on URLs to determine page relevance. Such a crawler is able to choose, at each step, only the URLs that lead to relevant pages, and therefore reduces the number of unnecessary pages downloaded, optimising bandwidth and making it efficient and suitable for virtual integration systems. Our preliminary experiments show that such a classifier is able to distinguish between links leading to different kinds of pages, without prior intervention from the user.
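
The URL-only classification idea can be illustrated with a deliberately simple pattern scheme; the paper's actual features and learning procedure may differ. Each URL is generalised by collapsing digit runs, and the crawler follows only links whose pattern was seen among known relevant pages:

```python
import re
from urllib.parse import urlparse

def url_pattern(url):
    """Generalise a URL: /product/123 and /product/42 share one pattern."""
    return re.sub(r"\d+", "{N}", urlparse(url).path)

class UrlClassifier:
    """Decides page relevance from the URL alone, with no download needed."""
    def __init__(self, relevant_examples):
        self.relevant = {url_pattern(u) for u in relevant_examples}

    def should_follow(self, url):
        return url_pattern(url) in self.relevant

clf = UrlClassifier([
    "https://shop.example.org/product/101",
    "https://shop.example.org/product/2077",
])
print(clf.should_follow("https://shop.example.org/product/31"))    # True
print(clf.should_follow("https://shop.example.org/help/contact"))  # False
```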


Intelligent Systems Design and Applications | 2011

An architecture for web information agents

Hassan A. Sleiman; Rafael Corchuelo

Many authors are researching information extraction techniques that transform the semi-structured information in typical web pages into structured information. When researchers devise a new technique, they have to validate it, which requires implementing it, experimenting, gathering precision and recall results, comparing it to others, and drawing conclusions. This involves an array of details that are specific to the technique, but many others that are shared with other proposals. Unfortunately, the literature does not provide a single up-to-date platform to guide software engineers and researchers in the design and implementation of information extractors. In this paper, we present a platform for designing and implementing learners of information extraction rules. Due to space constraints, we focus on the class of learners that learn hierarchical transducers. We have implemented our platform, and we have validated it by means of three case studies.


Knowledge-Based Systems | 2016

ARIEX: Automated ranking of information extractors

Patricia Jiménez; Rafael Corchuelo; Hassan A. Sleiman

Information extractors are used to transform the user-friendly information in a web document into structured information that can feed a knowledge-based system. Researchers are interested in ranking them to find out which one performs best. Unfortunately, many rankings in the literature are deficient. There are a number of formal methods to rank information extractors, but they also have many problems and have not reached widespread popularity. In this article, we present ARIEX, an automated method to rank web information extraction proposals. It does not suffer from any of the problems that we have identified in the literature. Our proposal should help authors make sure that they have advanced the state of the art not only conceptually but also empirically; it should likewise help practitioners make informed decisions on which proposal is the most adequate for a particular problem.
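
ARIEX itself is considerably more involved, but a toy rank-based comparison shows the flavour of the problem and why per-dataset ranks can disagree with naive score averaging; all F1 numbers below are invented for illustration:

```python
def mean_ranks(scores):
    """scores: {extractor: [f1 on dataset 1, f1 on dataset 2, ...]}.
    Rank extractors on every dataset, then order by mean rank.
    Ties are ignored for brevity."""
    names = list(scores)
    n_datasets = len(next(iter(scores.values())))
    totals = {name: 0.0 for name in names}
    for d in range(n_datasets):
        ordered = sorted(names, key=lambda n: -scores[n][d])
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return sorted((t / n_datasets, n) for n, t in totals.items())

f1 = {
    "extractor-A": [0.91, 0.88, 0.90, 0.35],   # strong, but brittle on one site
    "extractor-B": [0.85, 0.86, 0.87, 0.84],
    "extractor-C": [0.60, 0.62, 0.58, 0.61],
}
for rank, name in mean_ranks(f1):
    print(f"{name}: mean rank {rank:.2f}")
# extractor-A: mean rank 1.50
# extractor-B: mean rank 1.75
# extractor-C: mean rank 2.75
```

Note that extractor-A tops the mean-rank ordering even though naive F1 averaging would put extractor-B first; making such trade-offs explicit and well founded is precisely what a sound ranking method has to do.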

Collaboration


Dive into Hassan A. Sleiman's collaborations. Top co-authors:

Carlos R. Rivero

Rochester Institute of Technology

Alberto Pan

University of A Coruña
