Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carsten Eickhoff is active.

Publication


Featured research published by Carsten Eickhoff.


International World Wide Web Conference | 2016

Probabilistic Bag-Of-Hyperlinks Model for Entity Linking

Octavian-Eugen Ganea; Marina Ganea; Aurelien Lucchi; Carsten Eickhoff; Thomas Hofmann

Many fundamental problems in natural language processing rely on determining what entities appear in a given text. Commonly referred to as entity linking, this step is a fundamental component of many NLP tasks such as text understanding, automatic summarization, semantic search or machine translation. Name ambiguity, word polysemy, context dependencies and a heavy-tailed distribution of entities contribute to the complexity of this problem. We here propose a probabilistic approach that makes use of an effective graphical model to perform collective entity disambiguation. Input mentions (i.e., linkable token spans) are disambiguated jointly across an entire document by combining a document-level prior of entity co-occurrences with local information captured from mentions and their surrounding context. The model is based on simple sufficient statistics extracted from data, thus relying on few parameters to be learned. Our method does not require extensive feature engineering, nor an expensive training procedure. We use loopy belief propagation to perform approximate inference. The low complexity of our model makes this step sufficiently fast for real-time usage. We demonstrate the accuracy of our approach on a wide range of benchmark datasets, showing that it matches, and in many cases outperforms, existing state-of-the-art methods.
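To make the collective disambiguation idea concrete, the sketch below runs max-product loopy belief propagation over mention candidates, combining per-mention (unary) scores with a pairwise entity co-occurrence prior. The candidate entities, all scores and the number of sweeps are invented for illustration and do not reproduce the paper's model.

```python
# A minimal sketch of collective entity disambiguation with max-product loopy
# belief propagation. All values below are toy numbers for illustration only.

# log-space unary potentials: mention-context compatibility per candidate
unary = {
    "Ganea":  {"Octavian_Ganea": -0.2, "Dan_Ganea": -1.5},
    "Zurich": {"Zurich_city": -0.3, "FC_Zurich": -1.0},
}

COOC = {("Octavian_Ganea", "Zurich_city"): 0.8}   # toy co-occurrence prior

def pairwise(e1, e2):
    """Symmetric log co-occurrence score; weak penalty for unseen pairs."""
    return COOC.get((e1, e2), COOC.get((e2, e1), -0.5))

mentions = list(unary)

# messages[src][dst][entity_of_dst], kept in log space and initialised to zero
messages = {a: {b: {e: 0.0 for e in unary[b]} for b in mentions if b != a}
            for a in mentions}

for _ in range(10):                               # a few sweeps usually suffice
    for a in mentions:
        for b in messages[a]:
            for eb in unary[b]:
                messages[a][b][eb] = max(
                    unary[a][ea] + pairwise(ea, eb)
                    + sum(messages[c][a][ea] for c in mentions if c not in (a, b))
                    for ea in unary[a])

# belief = unary score plus all incoming messages; pick the arg-max per mention
for m in mentions:
    belief = {e: unary[m][e] + sum(messages[c][m][e] for c in mentions if c != m)
              for e in unary[m]}
    print(m, "->", max(belief, key=belief.get))
```

Because the graph over mentions contains cycles, the messages are swept a fixed number of times rather than solved exactly, which is what keeps inference fast enough for real-time use.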


Cross Language Evaluation Forum | 2017

Overview of ImageCLEF 2017: information extraction from images

Bogdan Ionescu; Henning Müller; Mauricio Villegas; Helbert Arenas; Giulia Boato; Duc-Tien Dang-Nguyen; Yashin Dicente Cid; Carsten Eickhoff; Alba Garcia Seco de Herrera; Cathal Gurrin; Bayzidul Islam; Vassili Kovalev; Vitali Liauchuk; Josiane Mothe; Luca Piras; Michael Riegler; Immanuel Schwall

This paper presents an overview of the ImageCLEF 2017 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) Labs 2017. ImageCLEF is an ongoing initiative (started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval for providing information access to collections of images in various usage scenarios and domains. In 2017, the 15th edition of ImageCLEF, three main tasks and one pilot task were proposed: (1) a LifeLog task about searching in LifeLog data, i.e., videos, images and other sources; (2) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based on the figure alone; (3) a tuberculosis task that aims at detecting the tuberculosis type and drug resistance from CT (Computed Tomography) volumes of the lung; and (4) a remote sensing pilot task that aims at predicting population density based on satellite images. The strong participation, with over 150 research groups registering for the four tasks and 27 groups submitting results, shows the interest in this benchmarking campaign despite the fact that all four tasks were new and had to build their own communities.


Web Search and Data Mining | 2018

Cognitive Biases in Crowdsourcing

Carsten Eickhoff

Crowdsourcing has become a popular paradigm in data curation, annotation and evaluation for many artificial intelligence and information retrieval applications. Considerable efforts have gone into devising effective quality control mechanisms that identify or discourage cheat submissions in an attempt to improve the quality of noisy crowd judgments. Besides purposeful cheating, there is another source of noise that is often alluded to but insufficiently studied: Cognitive biases. This paper investigates the prevalence and effect size of a range of common cognitive biases on a standard relevance judgment task. Our experiments are based on three sizable publicly available document collections and note significant detrimental effects on annotation quality, system ranking and the performance of derived rankers when task design does not account for such biases.
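As a rough illustration of how an effect size for such a bias might be quantified, the sketch below computes Cohen's d between relevance labels collected under a neutral and a bias-inducing task design. The two label samples are fabricated; the paper's actual analysis uses large public document collections and real crowd judgments.

```python
# Hedged sketch: effect size (Cohen's d) of a bias-inducing task design on
# graded relevance judgments. The samples are made up for illustration.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardised mean difference between two independent samples."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# graded relevance labels (0-3) under a neutral vs. an anchoring-prone design
neutral  = [2, 3, 1, 2, 2, 3, 1, 2, 2, 3]
anchored = [1, 2, 1, 1, 2, 2, 1, 1, 2, 2]
print(f"Cohen's d = {cohens_d(neutral, anchored):.2f}")
```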


European Conference on Information Retrieval | 2016

Probabilistic Local Expert Retrieval

Wen Li; Carsten Eickhoff; Arjen P. de Vries

This paper proposes a range of probabilistic models of local expertise based on geo-tagged social network streams. We assume that frequent visits result in greater familiarity with the location in question. To capture this notion, we rely on spatio-temporal information from users’ online check-in profiles. We evaluate the proposed models on a large-scale sample of geo-tagged and manually annotated Twitter streams. Our experiments show that the proposed methods outperform both intuitive baselines and established models such as the iterative inference scheme.
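A minimal sketch of the underlying intuition, visit frequency as a proxy for local familiarity, is shown below. The exponential recency decay and the 90-day half-life are assumptions for illustration, not the paper's models.

```python
# Illustrative visit-frequency notion of local expertise: a user's score for a
# location grows with the number of check-ins there, discounted by their age.
from datetime import datetime, timedelta

def expertise(checkins, location, now, half_life_days=90):
    """Sum of recency-weighted check-ins of one user at one location."""
    score = 0.0
    for place, when in checkins:
        if place == location:
            age_days = (now - when).days
            score += 0.5 ** (age_days / half_life_days)   # exponential decay
    return score

now = datetime(2016, 1, 1)
checkins = [("Zurich", now - timedelta(days=10)),
            ("Zurich", now - timedelta(days=200)),
            ("Delft",  now - timedelta(days=5))]
print(round(expertise(checkins, "Zurich", now), 3))
```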


Conference on Information and Knowledge Management | 2016

Active Content-Based Crowdsourcing Task Selection

Piyush Bansal; Carsten Eickhoff; Thomas Hofmann

Crowdsourcing has long established itself as a viable alternative to corpus annotation by domain experts for tasks such as document relevance assessment. The crowdsourcing process traditionally relies on high degrees of label redundancy in order to mitigate the detrimental effects of individually noisy worker submissions. Such redundancy comes at the cost of increased label volume, and, subsequently, monetary requirements. In practice, especially as the size of datasets increases, this is undesirable. In this paper, we focus on an alternate method that exploits document information instead, to infer relevance labels for unjudged documents. We present an active learning scheme for document selection that aims at maximising the overall relevance label prediction accuracy for a given budget of available relevance judgements, by exploiting system-wide estimates of label variance and mutual information. Our experiments are based on TREC 2011 Crowdsourcing Track data and show that our method is able to achieve state-of-the-art performance while requiring 17%-25% less budget.
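The selection step can be illustrated with a tiny uncertainty-sampling sketch: among the unjudged documents, request a crowd label for the one whose predicted relevance is most uncertain. The probability estimates below are placeholders for a classifier trained on already-judged documents, and the paper's criterion additionally uses mutual information rather than variance alone.

```python
# Minimal uncertainty-sampling sketch for content-based task selection:
# pick the unjudged document with the largest Bernoulli label variance
# p * (1 - p). The probabilities are hypothetical placeholder values.
def pick_next(unjudged, predict_prob):
    """Return the document id with maximal predicted label variance."""
    return max(unjudged, key=lambda d: predict_prob(d) * (1 - predict_prob(d)))

toy_probs = {"d1": 0.95, "d2": 0.52, "d3": 0.10}   # hypothetical P(relevant)
print(pick_next(toy_probs, toy_probs.get))          # -> d2, the most uncertain
```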


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

Computing Web-scale Topic Models using an Asynchronous Parameter Server

Rolf Jagerman; Carsten Eickhoff; Maarten de Rijke

Topic models such as Latent Dirichlet Allocation (LDA) have been widely used in information retrieval for tasks ranging from smoothing and feedback methods to tools for exploratory search and discovery. However, classical methods for inferring topic models do not scale up to the massive size of today's publicly available Web-scale data sets. The state-of-the-art approaches rely on custom strategies, implementations and hardware to facilitate their asynchronous, communication-intensive workloads. We present APS-LDA, which integrates state-of-the-art topic modeling with cluster computing frameworks such as Spark using a novel asynchronous parameter server. Advantages of this integration include convenient usage of existing data processing pipelines and eliminating the need for disk writes as data can be kept in memory from start to finish. Our goal is not to outperform highly customized implementations, but to propose a general high-performance topic modeling framework that can easily be used in today's data processing pipelines. We compare APS-LDA to the existing Spark LDA implementations and show that our system can, on a 480-core cluster, process up to 135× more data and 10× more topics without sacrificing model quality.
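To make the parameter-server split concrete, the sketch below keeps the global topic-word counts behind a push/pull interface, as a worker would see them during collapsed Gibbs sampling. The in-process dictionary, the hyperparameters and the single-token update are simplifications for illustration and are not the APS-LDA implementation.

```python
# Conceptual sketch of the parameter-server split in distributed LDA: workers
# resample topic assignments locally and push count deltas to a shared store
# holding the global topic-word table. The dictionary below stands in for the
# remote asynchronous parameter server; hyperparameters are toy values.
import random
from collections import defaultdict

K, ALPHA, BETA, VOCAB = 5, 0.1, 0.01, 1000          # toy hyperparameters

class ParameterServer:                               # stand-in for the remote store
    def __init__(self):
        self.topic_word = defaultdict(int)           # (topic, word) -> count
        self.topic_total = defaultdict(int)          # topic -> total count
    def push(self, topic, word, delta):
        self.topic_word[(topic, word)] += delta
        self.topic_total[topic] += delta
    def pull(self, topic, word):
        return self.topic_word[(topic, word)], self.topic_total[topic]

def resample(word, old_topic, doc_topic_counts, ps):
    """One collapsed-Gibbs update for a single token on one worker.

    doc_topic_counts maps topic -> count for the current document."""
    ps.push(old_topic, word, -1)                     # retract the old assignment
    doc_topic_counts[old_topic] -= 1
    weights = []
    for k in range(K):
        nkw, nk = ps.pull(k, word)
        weights.append((doc_topic_counts[k] + ALPHA)
                       * (nkw + BETA) / (nk + VOCAB * BETA))
    new_topic = random.choices(range(K), weights)[0]
    ps.push(new_topic, word, 1)                      # publish the new assignment
    doc_topic_counts[new_topic] += 1
    return new_topic
```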


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2016

A Cross-Platform Collection of Social Network Profiles

Maria Han Veiga; Carsten Eickhoff

The proliferation of Internet-enabled devices and services has led to a shifting balance between digital and analogue aspects of our everyday lives. In the face of this development there is a growing demand for the study of privacy hazards, the potential for unique user deanonymization and information leakage between the various social media profiles many of us maintain. To enable the structured study of such adversarial effects, this paper presents a dedicated dataset of cross-platform social network personas (i.e., the same person has accounts on multiple platforms). The corpus comprises 850 users who generate predominantly English content. Each user object contains the online footprint of the same person in three distinct social networks: Twitter, Instagram and Foursquare. In total, it encompasses over 2.5M tweets, 340k check-ins and 42k Instagram posts. We describe the collection methodology, characteristics of the dataset, and how to obtain it. Finally, we discuss a common use case, cross-platform user identification.
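A persona record from such a corpus might be loaded into a structure like the following. The class and field names here are hypothetical and do not reflect the dataset's actual schema; consult the collection's documentation for the real format.

```python
# Hedged sketch of a cross-platform persona record for analysis purposes.
# Field names are hypothetical, not the dataset's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Persona:
    user_id: str
    tweets: List[str] = field(default_factory=list)               # Twitter posts
    foursquare_checkins: List[str] = field(default_factory=list)  # venue check-ins
    instagram_posts: List[str] = field(default_factory=list)      # Instagram captions

# A cross-platform identification study would, e.g., compare writing style or
# posting-time patterns between a Persona's tweets and Instagram posts.
```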


European Conference on Information Retrieval | 2018

Web2Text: Deep Structured Boilerplate Removal

Thijs Vogels; Octavian-Eugen Ganea; Carsten Eickhoff

Web pages are a valuable source of information for many natural language processing and information retrieval tasks. Extracting the main content from those documents is essential for the performance of derived applications. To address this issue, we introduce a novel model that performs sequence labeling to collectively classify all text blocks in an HTML page as either boilerplate or main content. Our method uses a hidden Markov model on top of potentials derived from DOM tree features using convolutional neural networks. The proposed method sets a new state-of-the-art performance for boilerplate removal on the CleanEval benchmark. As a component of information retrieval pipelines, it improves retrieval performance on the ClueWeb12 collection.
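The sequence-labeling step can be sketched as Viterbi decoding over per-block label scores plus a transition score that encourages label runs. In the paper the unary potentials come from CNNs over DOM-tree features; the numbers below are toy values for illustration.

```python
# Illustrative Viterbi decoding over per-block {boilerplate, content} scores.
# Unary scores and transitions are fabricated toy log-scores.
LABELS = ("boilerplate", "content")

def viterbi(unary, transition):
    """unary: list of {label: log-score}; transition: {(prev, cur): log-score}."""
    best = [dict(unary[0])]
    back = [{}]
    for scores in unary[1:]:
        cur, ptr = {}, {}
        for lab in LABELS:
            prev = max(LABELS, key=lambda p: best[-1][p] + transition[(p, lab)])
            cur[lab] = best[-1][prev] + transition[(prev, lab)] + scores[lab]
            ptr[lab] = prev
        best.append(cur)
        back.append(ptr)
    path = [max(LABELS, key=best[-1].get)]          # backtrack from the best end label
    for ptr in reversed(back[1:]):
        path.append(ptr[path[-1]])
    return list(reversed(path))

unary = [{"boilerplate": -0.1, "content": -2.0},
         {"boilerplate": -1.0, "content": -0.4},
         {"boilerplate": -1.2, "content": -0.3}]
transition = {(p, c): (0.0 if p == c else -0.7) for p in LABELS for c in LABELS}
print(viterbi(unary, transition))   # -> ['boilerplate', 'content', 'content']
```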


European Conference on Information Retrieval | 2018

Biomedical Question Answering via Weighted Neural Network Passage Retrieval

Ferenc Galkó; Carsten Eickhoff

The amount of publicly available biomedical literature has been growing rapidly in recent years, yet question answering systems still struggle to exploit the full potential of this source of data. In a preliminary processing step, many question answering systems rely on retrieval models for identifying relevant documents and passages. This paper proposes a weighted cosine distance retrieval scheme based on neural network word embeddings. Our experiments are based on publicly available data and tasks from the BioASQ biomedical question answering challenge and demonstrate significant performance gains over a wide range of state-of-the-art models.
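A minimal sketch of the weighted cosine scheme, assuming toy embeddings and IDF weights in place of the biomedical vectors used in the paper, could look like this:

```python
# Hedged sketch: rank passages by cosine similarity between IDF-weighted
# average word embeddings of the question and of each passage.
# Embeddings and IDF weights below are toy placeholders.
import numpy as np

def weighted_avg(tokens, embeddings, idf):
    vecs = [idf.get(t, 1.0) * embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

embeddings = {"protein": np.array([0.9, 0.1]),
              "binds":   np.array([0.7, 0.3]),
              "weather": np.array([0.1, 0.9])}
idf = {"protein": 2.0, "binds": 1.5, "weather": 1.0}

question = ["protein", "binds"]
passages = {"p1": ["protein", "binds", "protein"], "p2": ["weather"]}
q = weighted_avg(question, embeddings, idf)
ranked = sorted(passages,
                key=lambda p: cosine(q, weighted_avg(passages[p], embeddings, idf)),
                reverse=True)
print(ranked)   # passages ordered by similarity to the question
```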


Cross Language Evaluation Forum | 2018

Overview of ImageCLEF 2018: Challenges, Datasets and Evaluation

Bogdan Ionescu; Henning Müller; Mauricio Villegas; Alba Garcia Seco de Herrera; Carsten Eickhoff; Vincent Andrearczyk; Yashin Dicente Cid; Vitali Liauchuk; Vassili Kovalev; Sadid A. Hasan; Yuan Ling; Oladimeji Farri; Joey Liu; Matthew P. Lungren; Duc-Tien Dang-Nguyen; Luca Piras; Michael Riegler; Liting Zhou; Mathias Lux; Cathal Gurrin

This paper presents an overview of the ImageCLEF 2018 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) Labs 2018. ImageCLEF is an ongoing initiative (it started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval with the aim of providing information access to collections of images in various usage scenarios and domains. In 2018, the 16th edition of ImageCLEF ran three main tasks and a pilot task: (1) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based only on the figure image; (2) a tuberculosis task that aims at detecting the tuberculosis type, severity and drug resistance from CT (Computed Tomography) volumes of the lung; (3) a LifeLog task (videos, images and other sources) about daily activities understanding and moment retrieval, and (4) a pilot task on visual question answering where systems are tasked with answering medical questions. The strong participation, with over 100 research groups registering and 31 submitting results for the tasks, shows an increasing interest in this benchmarking campaign.

Collaboration


Dive into Carsten Eickhoff's collaborations.

Top Co-Authors

Alba Garcia Seco de Herrera, University of Applied Sciences Western Switzerland
Henning Müller, University of Applied Sciences Western Switzerland
Vincent Andrearczyk, University of Applied Sciences Western Switzerland
Yashin Dicente Cid, University of Applied Sciences Western Switzerland
Bogdan Ionescu, Politehnica University of Bucharest
Mauricio Villegas, Polytechnic University of Valencia