Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Daniel Dahlmeier is active.

Publication


Featured research published by Daniel Dahlmeier.


Bioinformatics | 2010

Domain adaptation for semantic role labeling in the biomedical domain

Daniel Dahlmeier; Hwee Tou Ng

MOTIVATION: Semantic role labeling (SRL) is a natural language processing (NLP) task that extracts a shallow meaning representation from free text sentences. Several efforts to create SRL systems for the biomedical domain have been made during the last few years. However, state-of-the-art SRL relies on manually annotated training instances, which are rare and expensive to prepare. In this article, we address SRL for the biomedical domain as a domain adaptation problem in order to leverage existing SRL resources from the newswire domain.

RESULTS: We evaluate the performance of three recently proposed domain adaptation algorithms for SRL. Our results show that domain adaptation significantly reduces the cost of developing an SRL system for the biomedical domain. Using domain adaptation, our system achieves 97% of the performance with as few as 60 annotated target-domain abstracts.

AVAILABILITY: Our BioKIT system, which performs SRL in the biomedical domain as described in this article, is implemented in Python and C and runs under the Linux operating system. BioKIT can be downloaded at http://nlp.comp.nus.edu.sg/software. The domain adaptation software is available at http://www.mysmu.edu/faculty/jingjiang/software/DALR.html. The BioProp corpus is available from the Linguistic Data Consortium, http://www.ldc.upenn.edu.
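
The abstract does not spell out the adaptation algorithms themselves. As a purely illustrative example of the general idea (reusing source-domain training data in a new target domain), the sketch below shows feature augmentation, one well-known domain adaptation technique; it is an assumption for illustration and not necessarily one of the three algorithms evaluated in the paper.

    # Minimal sketch of feature augmentation ("frustratingly easy" domain
    # adaptation): every feature gets a shared copy and a domain-specific copy,
    # so a single classifier trained on newswire + biomedical data can learn
    # which features transfer across domains. Purely illustrative.

    def augment_features(features, domain):
        """Map a feature dict onto a shared copy plus a domain-specific copy."""
        augmented = {}
        for name, value in features.items():
            augmented["shared:" + name] = value       # fires in both domains
            augmented[domain + ":" + name] = value    # fires only in this domain
        return augmented

    # Hypothetical SRL features from the two domains.
    newswire = augment_features({"predicate=say": 1.0, "path=NP^S": 1.0}, "src")
    biomedical = augment_features({"predicate=activate": 1.0, "path=NP^S": 1.0}, "tgt")
    # A standard classifier (e.g. maximum entropy) trained on the union of the
    # two augmented sets can then exploit both shared and domain-specific evidence.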


Empirical Methods in Natural Language Processing | 2009

Joint Learning of Preposition Senses and Semantic Roles of Prepositional Phrases

Daniel Dahlmeier; Hwee Tou Ng; Tanja Schultz

The sense of a preposition is related to the semantics of its dominating prepositional phrase. Knowing the sense of a preposition could help to correctly classify the semantic role of the dominating prepositional phrase and vice versa. In this paper, we propose a joint probabilistic model for word sense disambiguation of prepositions and semantic role labeling of prepositional phrases. Our experiments on the PropBank corpus show that jointly learning the word sense and the semantic role leads to an improvement over state-of-the-art individual classifier models on the two tasks.
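
The abstract does not give the joint model's parameterisation; the toy sketch below only illustrates the core intuition of joint decoding, where the preposition sense and the semantic role of the dominating prepositional phrase are chosen together rather than in isolation. All probabilities and the compatibility table are invented for the example.

    # Toy illustration of joint decoding over (preposition sense, semantic role)
    # pairs: the individual classifiers' probabilities are combined with a
    # compatibility score, so evidence about the sense can flip the role
    # decision and vice versa. Numbers are made up.
    import itertools

    def joint_decode(sense_probs, role_probs, compat):
        """Return the (sense, role) pair with the highest joint score."""
        return max(
            itertools.product(sense_probs, role_probs),
            key=lambda sr: sense_probs[sr[0]] * role_probs[sr[1]] * compat.get(sr, 1.0),
        )

    sense_probs = {"temporal": 0.6, "spatial": 0.4}       # P(sense | preposition)
    role_probs = {"ARGM-TMP": 0.45, "ARGM-LOC": 0.55}     # P(role | PP)
    compat = {("temporal", "ARGM-TMP"): 1.5, ("spatial", "ARGM-LOC"): 1.1}

    # The role classifier alone prefers ARGM-LOC, but the temporal sense
    # tips the joint decision to ("temporal", "ARGM-TMP").
    print(joint_decode(sense_probs, role_probs, compat))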


Meeting of the Association for Computational Linguistics | 2017

An Unsupervised Neural Attention Model for Aspect Extraction

Ruidan He; Wee Sun Lee; Hwee Tou Ng; Daniel Dahlmeier

Methods, systems, and computer-readable storage media for: receiving a vocabulary, the vocabulary including text data provided as at least a portion of raw data in a computer-readable file; associating each word in the vocabulary with a feature vector; providing a sentence embedding for each sentence of the vocabulary based on a plurality of feature vectors; providing a reconstructed sentence embedding for each sentence embedding based on a weighted parameter matrix; and training the unsupervised neural attention model on the sentence embeddings and the reconstructed sentence embeddings to provide a trained neural attention model, the trained model being used to automatically determine aspects from the vocabulary.
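
The text above reads like the abstract of the related patent filing rather than of the ACL paper itself. Below is a minimal numpy sketch of the forward pass it describes: attention over word vectors forms a sentence embedding, which is then reconstructed from a small set of aspect embeddings. The dimensions, the attention scoring function, and the loss are simplifying assumptions, not the published model.

    # Minimal sketch of the attention-based sentence embedding and its
    # reconstruction from aspect embeddings. Parameters are random stand-ins;
    # the published model is trained with a reconstruction objective.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    d, n_words, n_aspects = 8, 5, 3

    E = rng.normal(size=(n_words, d))      # word feature vectors for one sentence
    M = rng.normal(size=(d, d))            # attention parameter matrix
    T = rng.normal(size=(n_aspects, d))    # aspect embedding matrix

    avg = E.mean(axis=0)                   # context vector: average word vector
    attn = softmax(E @ M @ avg)            # attention weight per word
    z = attn @ E                           # sentence embedding

    p = softmax(T @ z)                     # aspect weights for this sentence
    r = p @ T                              # reconstructed sentence embedding

    # Cosine-style reconstruction error between the sentence embedding and
    # its reconstruction; training would minimize a loss of this kind.
    loss = 1.0 - (r @ z) / (np.linalg.norm(r) * np.linalg.norm(z))
    print(attn.round(3), p.round(3), float(loss))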


Empirical Methods in Natural Language Processing | 2016

Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis

Wenya Wang; Sinno Jialin Pan; Daniel Dahlmeier; Xiaokui Xiao

In aspect-based sentiment analysis, extracting aspect terms along with the opinions expressed about them from user-generated content is one of the most important subtasks. Previous studies have shown that exploiting connections between aspect and opinion terms is promising for this task. In this paper, we propose a novel joint model that integrates recursive neural networks and conditional random fields into a unified framework for explicit aspect and opinion term co-extraction. The proposed model learns high-level discriminative features and doubly propagates information between aspect and opinion terms simultaneously. Moreover, hand-crafted features can be flexibly incorporated into the model to further boost its extraction performance. Experimental results on the SemEval Challenge 2014 dataset show the superiority of our proposed model over several baseline methods as well as the winning systems of the challenge.
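
A minimal sketch of the CRF half of such a model is shown below: per-token emission scores (which in the paper come from the recursive neural network) are combined with label-to-label transition scores and decoded with Viterbi, so aspect and opinion spans are predicted jointly. The recursive feature learner is omitted and all scores here are random stand-ins.

    # Linear-chain CRF decoding over BIO labels for aspect/opinion co-extraction.
    # Emission scores stand in for the learned token representations.
    import numpy as np

    LABELS = ["O", "B-ASP", "I-ASP", "B-OPI", "I-OPI"]

    def viterbi(emissions, transitions):
        """Best label sequence under emission + transition scores (log space)."""
        n, k = emissions.shape
        score = emissions[0].copy()
        back = np.zeros((n, k), dtype=int)
        for t in range(1, n):
            total = score[:, None] + transitions + emissions[t]   # (k, k)
            back[t] = total.argmax(axis=0)
            score = total.max(axis=0)
        path = [int(score.argmax())]
        for t in range(n - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return [LABELS[i] for i in reversed(path)]

    rng = np.random.default_rng(1)
    emissions = rng.normal(size=(6, len(LABELS)))      # one row per token, from the network
    transitions = rng.normal(size=(len(LABELS),) * 2)  # label-to-label transition scores
    print(viterbi(emissions, transitions))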


Artificial Intelligence | 2018

Memory networks for fine-grained opinion mining

Wenya Wang; Sinno Jialin Pan; Daniel Dahlmeier

Fine-grained opinion mining has attracted increasing attention recently because it provides richer information than coarse-grained sentiment analysis. Several existing works focus on extracting aspect (or opinion) terms by utilizing the syntactic relations among words given by a dependency parser. These approaches, however, require additional information and depend heavily on the quality of the parsing results. As a result, they may perform poorly on user-generated texts, such as product reviews and tweets, whose syntactic structure is imprecise. In this work, we offer an end-to-end deep learning model without any preprocessing. The model consists of a memory network that automatically learns the complicated interactions among aspect words and opinion words. Moreover, we extend the network in a multi-task manner to solve a finer-grained opinion mining problem, which is more challenging than the traditional fine-grained opinion mining problem. Specifically, the finer-grained problem involves identifying aspect and opinion terms within each sentence and categorizing the identified terms at the same time. To this end, we develop an end-to-end multi-task memory network in which aspect/opinion term extraction for a specific category is treated as one task, and all tasks are learned jointly by exploring commonalities and relationships among them. We demonstrate state-of-the-art performance of our proposed model on several benchmark datasets.
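
As a rough illustration of the memory-network idea described above, the sketch below performs a single attention "hop": a query vector attends over the word memories of a sentence and the attended summary updates the query for the next hop. Dimensions and the update rule are illustrative assumptions, not the paper's exact parameterisation.

    # One attention hop of a memory network reading a sentence.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def memory_hop(query, memory, W):
        """Read the word memories with the query and return an updated query."""
        attn = softmax(memory @ query)         # attention weight per word
        summary = attn @ memory                # attended summary of the sentence
        return np.tanh(W @ (query + summary))  # updated query for the next hop

    rng = np.random.default_rng(2)
    d, n_words, n_hops = 6, 7, 2
    memory = rng.normal(size=(n_words, d))     # word memories for one sentence
    query = rng.normal(size=d)                 # e.g. an aspect prototype vector
    W = rng.normal(size=(d, d))

    for _ in range(n_hops):
        query = memory_hop(query, memory, W)
    print(query.round(3))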


Meeting of the Association for Computational Linguistics | 2017

On the Challenges of Translating NLP Research into Commercial Products

Daniel Dahlmeier

This paper highlights challenges in industrial research related to translating natural language processing research into commercial products. While industry interest in natural language processing is significant, the transfer of research to commercial products is non-trivial, and its challenges are often unknown to or underestimated by many researchers. I discuss current obstacles and provide suggestions for increasing the chances of translating research into commercial success, based on my experience in industrial research.


Workshop on Innovative Use of NLP for Building Educational Applications | 2013

Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English

Daniel Dahlmeier; Hwee Tou Ng; Siew Mei Wu


North American Chapter of the Association for Computational Linguistics | 2012

Better Evaluation for Grammatical Error Correction

Daniel Dahlmeier; Hwee Tou Ng


Meeting of the Association for Computational Linguistics | 2011

Grammatical Error Correction with Alternating Structure Optimization

Daniel Dahlmeier; Hwee Tou Ng


Empirical Methods in Natural Language Processing | 2011

Correcting Semantic Collocation Errors with L1-induced Paraphrases

Daniel Dahlmeier; Hwee Tou Ng

Collaboration


Dive into Daniel Dahlmeier's collaborations.

Top Co-Authors

Hwee Tou Ng
National University of Singapore

Sinno Jialin Pan
Nanyang Technological University

Wenya Wang
Nanyang Technological University

Chang Liu
National University of Singapore

Wee Sun Lee
National University of Singapore

Eric Jun Feng Ng
National University of Singapore

Siew Mei Wu
National University of Singapore

Thanh Phu Tran
National University of Singapore

Wei Lu
National University of Singapore