Peter Schüller
Marmara University
Publications
Featured research published by Peter Schüller.
Artificial Intelligence | 2014
Thomas Eiter; Michael Fink; Peter Schüller; Antonius Weinzierl
We provide two approaches for explaining inconsistency in multi-context systems, where decentralized and heterogeneous system parts interact via nonmonotonic bridge rules. Inconsistencies arise easily in such scenarios, and nonmonotonicity calls for specific methods of inconsistency analysis. Both our approaches characterize inconsistency in terms of involved bridge rules: either by pointing out rules which need to be altered for restoring consistency, or by finding combinations of rules which cause inconsistency. We show duality and modularity properties, give precise complexity characterizations, and provide algorithms for computation using HEX-programs. Our results form a basis for inconsistency management in heterogeneous knowledge integration systems.
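As a rough illustration of the first notion (pointing out bridge rules to alter), the following sketch enumerates minimal sets of bridge rules whose removal restores consistency by brute force. The `consistent` predicate is a hypothetical stand-in for checking equilibrium existence; this is a toy, not the paper's HEX-based method.

```python
# Toy illustration: find minimal sets of bridge rules whose removal
# restores consistency. (The paper's diagnoses also allow forcing
# rules to apply unconditionally; that half is omitted here.)
from itertools import combinations

bridge_rules = ["r1", "r2", "r3"]

def consistent(active):
    # hypothetical stand-in for MCS equilibrium existence:
    # assume the system is inconsistent iff r1 and r2 are both active
    return not {"r1", "r2"} <= set(active)

diagnoses = []
for k in range(len(bridge_rules) + 1):
    for removed in combinations(bridge_rules, k):
        active = [r for r in bridge_rules if r not in removed]
        # keep only removals not subsumed by a smaller diagnosis
        if consistent(active) and not any(set(d) <= set(removed)
                                          for d in diagnoses):
            diagnoses.append(removed)
print(diagnoses)  # [('r1',), ('r2',)] -- minimal removals
```

Real multi-context systems make `consistent` expensive to decide, which is why the paper studies the complexity of this search and computes it with HEX-programs rather than naive enumeration.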
Journal of Artificial Intelligence Research | 2014
Thomas Eiter; Michael Fink; Christoph Redl; Peter Schüller
HEX-programs extend logic programs under the answer set semantics with external computations through external atoms. As reasoning from ground Horn programs with nonmonotonic external atoms of polynomial complexity is already on the second level of the polynomial hierarchy, minimality checking of answer set candidates needs special attention. To this end, we present an approach based on unfounded sets as a generalization of related techniques for ASP programs. The unfounded set detection is expressed as a propositional SAT problem, for which we provide two different encodings and optimizations to them. We then integrate our approach into a previously developed evaluation framework for HEX-programs, which is enriched by additional learning techniques that aim at avoiding the reconstruction of the same or related unfounded sets. Furthermore, we provide a syntactic criterion that allows one to skip the minimality check in many cases. An experimental evaluation shows that the new approach significantly decreases runtime.
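To illustrate the definition underlying the minimality check, the following sketch tests unfounded sets on a toy propositional program by naive enumeration. The paper encodes this search as a SAT problem; the brute-force loop below only demonstrates the concept on an invented two-rule example.

```python
# Brute-force unfounded-set check for a propositional normal program.
# A set U is unfounded w.r.t. candidate A if no rule with head in U
# can support its head "from outside" U.
from itertools import chain, combinations

# rules as (head, positive_body, negative_body)
rules = [("a", ("b",), ()), ("b", ("a",), ())]
candidate = {"a", "b"}   # supported model, but not an answer set

def is_unfounded(U, A):
    for head, pos, neg in rules:
        if head not in U:
            continue
        # the rule externally supports its head only if its body is
        # true under A and no positive body atom is itself in U
        externally_supports = (set(pos) <= A and not (set(neg) & A)
                               and not (set(pos) & U))
        if externally_supports:
            return False
    return True

subsets = chain.from_iterable(combinations(candidate, r)
                              for r in range(1, len(candidate) + 1))
found = [set(U) for U in subsets if is_unfounded(set(U), candidate)]
print(found)  # non-empty: the candidate is not an answer set
```

Here `a` and `b` only support each other, so {a, b} is unfounded and the candidate fails the minimality check; the SAT encodings in the paper perform this detection without enumerating subsets.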
AI Communications | 2016
Esra Erdem; Volkan Patoglu; Peter Schüller
We provide a systematic analysis of levels of integration between discrete high-level reasoning and continuous low-level feasibility checks to address hybrid planning problems in robotic applications. We identify four distinct strategies for such an integration: (i) low-level checks are done for all possible cases in advance, and this information is then used during plan generation; (ii) low-level checks are done exactly when they are needed during the search for a plan; (iii) low-level checks are done after a plan is computed, and a new plan is computed if the plan is found infeasible; (iv) as in the previous replanning strategy, but the new plan is computed subject to constraints obtained from previous low-level checks. We perform experiments on hybrid planning problems in the housekeeping domain considering these four methods of integration, as well as some of their combinations. We analyze the usefulness of different levels of integration in this domain, both from the point of view of computational efficiency (in time and space) and from the point of view of plan quality relative to its feasibility. We discuss the advantages and disadvantages of each strategy in the light of the experimental results.
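Strategies (iii) and (iv) can be sketched as a plan-check-replan loop. The functions below are hypothetical stand-ins for a high-level task planner and a continuous feasibility checker, not the paper's housekeeping-domain implementation.

```python
# Sketch of strategies (iii)/(iv): plan first, run low-level
# feasibility checks afterwards, and replan on failure. Strategy (iv)
# additionally feeds infeasible steps back as planning constraints.

def replan_with_constraints(plan_fn, feasible_fn, max_rounds=10):
    constraints = []                        # infeasible steps learned so far
    for _ in range(max_rounds):
        plan = plan_fn(constraints)         # discrete high-level reasoning
        if plan is None:
            return None                     # no plan satisfies constraints
        bad = [s for s in plan if not feasible_fn(s)]  # low-level checks
        if not bad:
            return plan                     # feasible plan found
        constraints.extend(bad)             # strategy (iv): remember failures
    return None

# Toy instance: plan steps are integers; odd steps are "infeasible".
plan = replan_with_constraints(
    plan_fn=lambda banned: [s for s in range(6) if s not in banned][:3],
    feasible_fn=lambda s: s % 2 == 0,
)
print(plan)  # [0, 2, 4]
```

Without the constraint feedback, the planner could return the same infeasible plan forever; that difference is exactly what separates strategy (iii) from (iv).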
Theory and Practice of Logic Programming | 2013
Esra Erdem; Volkan Patoglu; Zeynep Gozen Saribatur; Peter Schüller; Tansel Uras
We study the problem of finding optimal plans for multiple teams of robots through a mediator, where each team is given a task to complete in its workspace on its own and where teams are allowed to transfer robots between each other, subject to the following constraints: 1) teams (and the mediator) do not know about each other's workspaces or tasks (e.g., for privacy purposes); 2) every team can lend or borrow robots, but not both (e.g., transportation/calibration of robots between/for different workspaces is usually costly). We present a mathematical definition of this problem and analyze its computational complexity. We introduce a novel, logic-based method to solve this problem, utilizing action languages and answer set programming for representation, and state-of-the-art ASP solvers for reasoning. We show the applicability and usefulness of our approach through experiments on various scenarios of responsive and energy-efficient cognitive factories.
International Conference on Logic Programming | 2013
Michael Fink; Stefano Germano; Giovambattista Ianni; Christoph Redl; Peter Schüller
acthex programs are a convenient tool for connecting stateful external environments to logic programs. In the acthex framework, actions on an external environment can be declaratively selected, rearranged, scheduled, and then executed, depending on intelligence specified in an ASP-based language. In this paper we report on recent improvements to both the formal and the operational acthex programming framework. These improvements significantly increase the versatility of the framework; we also present illustrative application showcases and a short evaluation that exhibits the computational strengths of acthex.
North American Chapter of the Association for Computational Linguistics | 2016
Mishal Kazmi; Peter Schüller
In this paper we present our system developed for the SemEval 2016 Task 2 on Interpretable Semantic Textual Similarity, along with the results obtained for our submitted runs. Our system participated in the subtasks predicting chunk similarity alignments for gold chunks as well as for predicted chunks. The Inspire system extends the basic ideas of last year's participant NeRoSim; however, we realize the rules in logic programming and obtain the result with an Answer Set Solver. To prepare the input for the logic program, we use the PunktTokenizer, Word2Vec, and WordNet APIs of NLTK, and the POS- and NER-taggers from Stanford CoreNLP. For chunking we use a joint POS-tagger and dependency parser and, based on its output, determine chunks with an Answer Set Program. Our system ranked third place overall and first place in the Headlines gold chunk subtask.
International Conference on Logic Programming | 2013
Peter Schüller
Combinatory Categorial Grammar (CCG) is a grammar formalism used for natural language parsing. CCG assigns structured lexical categories to words and uses a small set of combinatory rules to combine these categories in order to parse sentences. In this work we describe and implement a new approach to CCG parsing that relies on Answer Set Programming (ASP), a declarative programming paradigm. Different from previous work, we present an encoding that is inspired by the algorithm due to Cocke, Younger, and Kasami (CYK). We also show encoding extensions for parse tree normalization and best-effort parsing, and outline future extensions made possible by the usage of ASP as the computational mechanism. We analyze the performance of our approach on a part of the Brown corpus and discuss lessons learned during experiments with the ASP tools dlv, gringo, and clasp. The new approach is available in the open source CCG parsing toolkit AspCcgTk, which uses the C&C supertagger as a preprocessor to achieve wide-coverage natural language parsing.
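The CYK-style chart construction that inspires the encoding can be illustrated on a plain context-free grammar in Chomsky normal form (the actual system combines structured CCG categories, and the grammar and lexicon below are invented for the example).

```python
# Minimal CYK recognizer: fill a chart of categories per span,
# combining adjacent spans bottom-up with binary rules.
from itertools import product

lexicon = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}   # toy lexicon
rules = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}         # toy grammar

def cyk(words):
    n = len(words)
    # chart[(i, j)] = categories derivable for the span words[i:j]
    chart = {(i, i + 1): set(lexicon[w]) for i, w in enumerate(words)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            cell = set()
            for k in range(i + 1, j):       # split the span at position k
                for left, right in product(chart.get((i, k), ()),
                                           chart.get((k, j), ())):
                    cell |= rules.get((left, right), set())
            chart[(i, j)] = cell
    return "S" in chart[(0, n)]

print(cyk(["she", "eats", "fish"]))  # True
```

In the ASP encoding, chart cells become atoms and the bottom-up combination step becomes a rule, so the solver performs this search declaratively.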
Expert Systems with Applications | 2017
Mishal Kazmi; Peter Schüller; Yücel Saygin
Inductive Logic Programming (ILP) combines rule-based and statistical artificial intelligence methods by learning a hypothesis comprising a set of rules, given background knowledge and constraints for the search space. We focus on extending the XHAIL algorithm for ILP, which is based on Answer Set Programming, and we evaluate our extensions using the Natural Language Processing application of sentence chunking. With respect to processing natural language, ILP can cater for the constant change in how we use language on a daily basis. At the same time, unlike other statistical methods, ILP does not require huge amounts of training examples and produces interpretable results, that is, a set of rules which can be analysed and tweaked if necessary. As contributions we extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, (ii) a better usage of modern solver technology using recently developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results. We evaluate these improvements on the task of sentence chunking using three datasets from a recent SemEval competition. Results show that our improvements allow for learning on bigger datasets with results that are of similar quality to state-of-the-art systems on the same task. Moreover, we compare the hypotheses obtained on the datasets to gain insights into the structure of each dataset.
International Conference on Logic Programming | 2013
Marcello Balduccini; Yuliya Lierler; Peter Schüller
Answer set programming (ASP) is a declarative programming paradigm stemming from logic programming that has been successfully applied in various domains. Despite amazing advancements in ASP solving, many applications still pose a challenge that is commonly referred to as the grounding bottleneck. Devising, implementing, and evaluating a method that alleviates this problem for certain application domains is the focus of this paper. The proposed method is based on combining backtracking-based search algorithms employed in answer set solvers with SLDNF resolution from Prolog. Using Prolog inference on non-ground portions of a given program, both grounding time and the size of the ground program can be substantially reduced.
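The grounding bottleneck can be seen on a recursive rule such as `path(X, Z) :- edge(X, Y), path(Y, Z)`: naive grounding instantiates it for every combination of constants, while goal-directed (Prolog-style) evaluation only visits instances reachable from the query. A toy sketch of the goal-directed side, with an invented edge relation:

```python
# SLD-style goal-directed search for path(X, Z) over an edge relation.
# Only nodes reachable from the start are ever explored, in contrast
# to grounding the recursive rule over all constant combinations.

edges = {("a", "b"), ("b", "c"), ("c", "d")}

def path(x, z, seen=None):
    seen = seen or set()
    if (x, z) in edges:                # base case: a direct edge
        return True
    for (u, v) in edges:               # recursive case, cycle-safe
        if u == x and v not in seen:
            if path(v, z, seen | {v}):
                return True
    return False

print(path("a", "d"))  # True: only edges reachable from "a" are touched
```

The paper's method keeps such recursive, deterministic parts on the Prolog side and reserves the answer set solver's backtracking search for the genuinely nonmonotonic choices.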
Journal of Experimental and Theoretical Artificial Intelligence | 2018
Peter Schüller
We describe the first automatic approach for merging coreference annotations obtained from multiple annotators into a single gold standard. This merging is subject to certain linguistic hard constraints and optimisation criteria that prefer solutions with minimal divergence from the annotators. The representation involves an equivalence relation over a large number of elements. We use Answer Set Programming to describe two representations of the problem and four objective functions suitable for different data-sets. We provide two structurally different real-world benchmark data-sets based on the METU-Sabanci Turkish Treebank, and we report our experiences in using the Gringo, Clasp and Wasp tools for computing optimal adjudication results on these data-sets.
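The equivalence relation over mentions at the heart of the adjudication problem can be maintained with a union-find structure. The sketch below merges coreference links from hypothetical annotators into equivalence classes; it deliberately ignores the paper's hard constraints and ASP-based optimisation, which are what make real adjudication hard.

```python
# Union-find over mention identifiers: each coreference link merges
# two equivalence classes; find() returns a class representative.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

annotator_links = [("m1", "m2"), ("m2", "m3"),   # annotator A's links
                   ("m4", "m5")]                  # annotator B's links
uf = UnionFind()
for a, b in annotator_links:
    uf.union(a, b)
print(uf.find("m1") == uf.find("m3"))  # True: same coreference chain
print(uf.find("m1") == uf.find("m4"))  # False: separate chains
```

Blindly taking the union of all annotators' links in this way can violate linguistic hard constraints, which is why the paper instead searches for a constrained equivalence relation minimising divergence from the annotators.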