Publication


Featured research published by Shalom Lappin.


Language | 1997

The handbook of contemporary semantic theory

Shalom Lappin

Notes on Contributors. Preface. Introduction.
Part I: Formal Semantics in Linguistics. 1. The Development of Formal Semantics in Linguistic Theory: Barbara H. Partee.
Part II: Generalized Quantifier Theory. 2. The Semantics of Determiners: Edward L. Keenan. 3. The Role of Situations in Generalized Quantifiers: Robin Cooper.
Part III: The Interface Between Syntax and Semantics. 4. The Syntax/Semantics Interface in Categorial Grammar: Pauline Jacobson. 5. Anaphora and Identity: Robert Fiengo and Robert May. 6. The Interpretation of Ellipsis: Shalom Lappin.
Part IV: Anaphora, Discourse and Modality. 7. Coreference and Modality: Jeroen Groenendijk, Martin Stokhof and Frank Veltman. 8. Anaphora in Intensional Contexts: Craige Roberts. 9. Quantification, Quantificational Domains and Dynamic Logic: Jean Mark Gawron.
Part V: Focus, Presupposition and Negation. 10. Focus: Mats Rooth. 11. Presupposition and Implicature: Laurence R. Horn. 12. Negation and Polarity Items: William A. Ladusaw.
Part VI: Tense. 13. Tense and Modality: Mürvet Enç.
Part VII: Questions. 14. The Semantics of Questions: James Higginbotham. 15. Interrogatives: Questions, Facts and Dialogue: Jonathan Ginzburg.
Part VIII: Plurals. 16. Plurality: Fred Landman.
Part IX: Computational Semantics. 17. Computational Semantics - Linguistics and Processing: John Nerbonne.
Part X: Lexical Semantics. 18. Lexical Semantics and Syntactic Structure: Beth Levin and Malka Rappaport Hovav.
Part XI: Semantics and Related Domains. 19. Semantics and Logic: Gila Y. Sher. 20. Semantics and Cognition: Ray Jackendoff. 21. Semantics, Pragmatics, and Natural-Language Interpretation: Ruth M. Kempson. 22. Semantics in Linguistics and Philosophy: An Intentionalist Perspective: Jerrold J. Katz.
References. Index.


Journal of Linguistics | 2007

Machine learning theory and practice as a source of insight into universal grammar.

Shalom Lappin; Stuart M. Shieber

In this paper, we explore the possibility that machine learning approaches to natural language processing (NLP) being developed in engineering-oriented computational linguistics (CL) may be able to provide specific scientific insights into the nature of human language. We argue that, in principle, machine learning (ML) results could inform basic debates about language, in one area at least, and that in practice, existing results may offer initial tentative support for this prospect. Further, results from computational learning theory can inform arguments carried on within linguistic theory as well.


Computational Linguistics | 2001

Introduction to the special issue on computational anaphora resolution

Ruslan Mitkov; Shalom Lappin; Branimir Boguraev

Anaphora accounts for cohesion in texts and is a phenomenon under active study in formal and computational linguistics alike. The correct interpretation of anaphora is vital for natural language processing (NLP). For example, anaphora resolution is a key task in natural language interfaces, machine translation, text summarization, information extraction, question answering, and a number of other NLP applications. After considerable initial research, followed by years of relative silence in the early 1980s, anaphora resolution has attracted the attention of many researchers in the last 10 years and a great deal of successful work on the topic has been carried out. Discourse-oriented theories and formalisms such as Discourse Representation Theory and Centering Theory inspired new research on the computational treatment of anaphora. The drive toward corpus-based robust NLP solutions further stimulated interest in alternative and/or data-enriched approaches. Last, but not least, application-driven research in areas such as automatic abstracting and information extraction independently highlighted the importance of anaphora and coreference resolution, boosting research in this area.

Much of the earlier work in anaphora resolution heavily exploited domain and linguistic knowledge (Sidner 1979; Carter 1987; Rich and LuperFoy 1988; Carbonell and Brown 1988), which was difficult both to represent and to process, and which required considerable human input. However, the pressing need for the development of robust and inexpensive solutions to meet the demands of practical NLP systems encouraged many researchers to move away from extensive domain and linguistic knowledge and to embark instead upon knowledge-poor anaphora resolution strategies. A number of proposals in the 1990s deliberately limited the extent to which they relied on domain and/or linguistic knowledge and reported promising results in knowledge-poor operational environments (Dagan and Itai 1990, 1991; Lappin and Leass 1994; Nasukawa 1994; Kennedy and Boguraev 1996; Williams, Harvey, and Preston 1996; Baldwin 1997; Mitkov 1996, 1998b). The drive toward knowledge-poor and robust approaches was further motivated by the emergence of cheaper and more reliable corpus-based NLP tools such as part-of-speech taggers and shallow parsers, alongside the increasing availability of corpora and other NLP resources (e.g., ontologies). In fact, the availability of corpora, both raw and annotated with coreferential links, provided a strong impetus to anaphora resolution.


International Conference on Computational Linguistics | 2004

Classifying ellipsis in dialogue: a machine learning approach

Raquel Fernández; Jonathan Ginzburg; Shalom Lappin

This paper presents a machine learning approach to bare sluice disambiguation in dialogue. We extract a set of heuristic principles from a corpus-based sample and formulate them as probabilistic Horn clauses. We then use the predicates of such clauses to create a set of domain-independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. Both learners perform well, yielding similar success rates of approximately 90%. The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features.
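
A minimal sketch of the classification setup described above, assuming scikit-learn stand-ins for the learners used in the paper: a decision tree plays the role of the rule-based learner (SLIPPER) and a 1-nearest-neighbour classifier the role of the memory-based learner (TiMBL). The feature names and sluice classes below are invented for illustration.

```python
# Illustrative only: features and labels are invented; scikit-learn models
# stand in for SLIPPER (rule-based) and TiMBL (memory-based).
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical domain-independent features derived from the heuristic
# predicates, e.g. properties of the sluice and its antecedent utterance.
X = [
    # [wh_word_is_who, antecedent_has_overt_np, sluice_is_standalone]
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
]
y = ["direct", "reprise", "direct", "clarification"]  # invented sluice classes

rule_based = DecisionTreeClassifier(max_depth=3).fit(X, y)    # SLIPPER stand-in
memory_based = KNeighborsClassifier(n_neighbors=1).fit(X, y)  # TiMBL stand-in

new_sluice = [[1, 1, 0]]
print(rule_based.predict(new_sluice), memory_based.predict(new_sluice))
```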


Conference of the European Chapter of the Association for Computational Linguistics | 2014

A Probabilistic Rich Type Theory for Semantic Interpretation

Robin Cooper; Simon Dobnik; Shalom Lappin; Staffan Larsson

We propose a probabilistic type theory in which a situation s is judged to be of a type T with probability p. In addition to basic and functional types it includes, inter alia, record types and a notion of typing based on them. The type system is intensional in that types of situations are not reduced to sets of situations. We specify a fragment of a compositional semantics in which truth conditions are replaced by probability conditions. The type system is the interface between classifying situations in perception and computing the semantic interpretations of phrases in natural language.
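
To make the central idea concrete, here is a toy sketch (not the authors' formalism) of judging a situation to be of a record type with probability p; the scores and the independence assumption used to combine field judgements are purely illustrative.

```python
# Toy sketch of probabilistic type judgements p(s : T).  A record type is a
# dict of field -> type expression; the probability that situation s is of a
# record type is taken here, for illustration only, as the product of the
# probabilities of its field judgements (an independence assumption).
def p_basic(situation, field, type_expr):
    """Hypothetical perceptual classifier score for the judgement field : type_expr."""
    return situation.get((field, type_expr), 0.0)

def p_record(situation, record_type):
    prob = 1.0
    for field, type_expr in record_type.items():
        prob *= p_basic(situation, field, type_expr)
    return prob

# A perceptually classified situation with invented scores.
s = {("x", "Ind"): 0.95, ("c", "boy(x)"): 0.9, ("e", "smile(x)"): 0.7}
BoySmiles = {"x": "Ind", "c": "boy(x)", "e": "smile(x)"}  # toy record type

print(p_record(s, BoySmiles))  # p(s : BoySmiles) = 0.5985
```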


Archive | 2011

Linguistic Nativism and the Poverty of the Stimulus

Alexander Clark; Shalom Lappin

Preface.
1 Introduction: Nativism in Linguistic Theory. 1.1 Historical Development. 1.2 The Rationalist-Empiricist Debate. 1.3 Nativism and Cognitive Modularity. 1.4 Connectionism, Nonmodularity, and Antinativism. 1.5 Adaptation and the Evolution of Natural Language. 1.6 Summary and Conclusions.
2 Clarifying the Argument from the Poverty of the Stimulus. 2.1 Formulating the APS. 2.2 Empiricist Learning versus Nativist Learning. 2.3 Our Version of the APS. 2.4 A Theory-Internal APS. 2.5 Evidence for the APS: Auxiliary Inversion as a Paradigm Case. 2.6 Debate on the PLD. 2.7 Learning Theory and Indispensable Data. 2.8 A Second Empirical Case: Anaphoric One. 2.9 Summary and Conclusions.
3 The Stimulus: Determining the Nature of Primary Linguistic Data. 3.1 Primary Linguistic Data. 3.2 Negative Evidence. 3.3 Semantic, Contextual, and Extralinguistic Evidence. 3.4 Prosodic Information. 3.5 Summary and Conclusions.
4 Learning in the Limit: The Gold Paradigm. 4.1 Formal Models of Language Acquisition. 4.2 Mathematical Models of Learnability. 4.3 The Gold Paradigm of Learnability. 4.4 Critique of the Positive-Evidence-Only APS in IIL. 4.5 Proper Positive Results. 4.6 Variants of the Gold Model. 4.7 Implications of Gold's Results for Linguistic Nativism. 4.8 Summary and Conclusions.
5 Probabilistic Learning Theory for Language Acquisition. 5.1 Chomsky's View of Statistical Learning. 5.2 Basic Assumptions of Statistical Learning Theory. 5.3 Learning Distributions. 5.4 Probabilistic Versions of the IIL Framework. 5.5 PAC Learning. 5.6 Consequences of PAC Learnability. 5.7 Problems with the Standard Model. 5.8 Summary and Conclusions.
6 A Formal Model of Indirect Negative Evidence. 6.1 Introduction. 6.2 From Low Probability to Ungrammaticality. 6.3 Modeling the DDA. 6.4 Applying the Functional Lower Bound. 6.5 Summary and Conclusions.
7 Computational Complexity and Efficient Learning. 7.1 Basic Concepts of Complexity. 7.2 Efficient Learning. 7.3 Negative Results. 7.4 Interpreting Hardness Results. 7.5 Summary and Conclusions.
8 Positive Results in Efficient Learning. 8.1 Regular Languages. 8.2 Distributional Methods. 8.3 Distributional Learning of Context-Free Languages. 8.4 Lattice-Based Formalisms. 8.5 Arguments against Distributional Learning. 8.6 Summary and Conclusions.
9 Grammar Induction through Implemented Machine Learning. 9.1 Supervised Learning. 9.2 Unsupervised Learning. 9.3 Summary and Conclusions.
10 Parameters in Linguistic Theory and Probabilistic Language Models. 10.1 Learnability of Parametric Models of Syntax. 10.2 UG Parameters and Language Variation. 10.3 Parameters in Probabilistic Language Models. 10.4 Inferring Constraints on Hypothesis Spaces with Hierarchical Bayesian Models. 10.5 Summary and Conclusions.
11 A Brief Look at Some Biological and Psychological Evidence. 11.1 Developmental Arguments. 11.2 Genetic Factors: Inherited Language Disorders. 11.3 Experimental Learning of Artificial Languages. 11.4 Summary and Conclusions.
12 Conclusion. 12.1 Summary. 12.2 Conclusions.
References. Author Index. Subject Index.


Applied Artificial Intelligence | 1995

Syntax and Lexical Statistics in Anaphora Resolution

Ido Dagan; John S. Justeson; Shalom Lappin; Herbert J. Leass; Amnon Ribak

We describe a syntactically based salience algorithm for pronominal anaphora resolution and a procedure for reevaluating the decisions of the algorithm on the basis of statistically modeled lexical semantic/pragmatic preferences. We report the results of an extensive blind test of both systems on computer manual text. We discuss the implications of these results for the comparative roles of syntactically defined salience and statistically measured lexical preference in determining the references of pronouns in text.
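
A hedged sketch of the syntactic salience component described above (the statistical re-evaluation step is omitted): candidate antecedents accumulate weighted salience factors whose contributions decay with sentence distance. The weights and the halving-per-sentence decay are illustrative values loosely modelled on Lappin and Leass (1994), not the exact figures used in this system.

```python
# Illustrative salience-based ranking of antecedent candidates for a pronoun.
SALIENCE_WEIGHTS = {        # invented/approximate weights
    "sentence_recency": 100,
    "subject": 80,
    "direct_object": 50,
    "indirect_object": 40,
}

def salience(candidate, current_sentence):
    """Sum the candidate's salience factors, halving each contribution
    for every sentence separating the candidate from the pronoun."""
    distance = current_sentence - candidate["sentence"]
    return sum(SALIENCE_WEIGHTS[f] / (2 ** distance) for f in candidate["factors"])

candidates = [
    {"np": "the terminal", "sentence": 1, "factors": ["sentence_recency", "subject"]},
    {"np": "the cable",    "sentence": 2, "factors": ["sentence_recency", "direct_object"]},
]

# Rank candidates for a pronoun occurring in sentence 2 (after agreement and
# syntactic filters, which are not shown here).
best = max(candidates, key=lambda c: salience(c, current_sentence=2))
print(best["np"])  # "the cable" (150.0 vs 90.0)
```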


Archive | 2010

The Handbook of Computational Linguistics and Natural Language Processing

Alexander Clark; Chris Fox; Shalom Lappin

The Handbook of Computational Linguistics and Natural Language Processing provides a comprehensive overview of the concepts, methodologies, and applications being undertaken today in computational linguistics and natural language processing. The work begins with an introduction to the major theoretical issues in these fields, as well as the central engineering applications that the work has produced. Also included is a detailed synopsis of the most cutting-edge research. The major developments in this dynamic field are presented in an accessible way that explains the close connection between scientific understanding of the computational properties of natural language and the creation of effective language technologies. The Handbook serves as an invaluable state-of-the-art reference source for computational linguists and software engineers developing natural language applications in industrial research and development labs of software companies, as well as for graduate students and researchers in computer science, linguistics, psychology, philosophy, and mathematics working within computational linguistics.


Cognitive Science | 2017

Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge

Jey Han Lau; Alexander Clark; Shalom Lappin

The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property, has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary and probabilistic theories of grammaticality. These judgments are gradient in nature, and so cannot be directly accommodated in a binary formal grammar. However, it is also not possible to simply reduce acceptability to probability. The acceptability of a sentence is not the same as the likelihood of its occurrence, which is, in part, determined by factors like sentence length and lexical frequency. In this paper, we present the results of a set of large-scale experiments using crowd-sourced acceptability judgments that demonstrate gradience to be a pervasive feature in acceptability judgments. We then show how one can predict acceptability judgments on the basis of probability by augmenting probabilistic language models with an acceptability measure. This is a function that normalizes probability values to eliminate the confounding factors of length and lexical frequency. We describe a sequence of modeling experiments with unsupervised language models drawn from state-of-the-art machine learning methods in natural language processing. Several of these models achieve very encouraging levels of accuracy in the acceptability prediction task, as measured by the correlation between the acceptability measure scores and mean human acceptability values. We consider the relevance of these results to the debate on the nature of grammatical competence, and we argue that they support the view that linguistic knowledge can be intrinsically probabilistic.
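
As a concrete illustration of the normalization idea, the sketch below computes a length- and frequency-corrected score in the spirit of SLOR (the syntactic log-odds ratio), one acceptability measure of the kind the paper describes. Both scoring functions here are toy placeholders; in practice the first would be a trained language model.

```python
import math

# Toy unigram probabilities (invented) used for the frequency correction and,
# crudely, inside the stand-in "language model" so the example runs.
UNIGRAM = {"the": 0.07, "dog": 0.001, "barked": 0.0005, "furiously": 0.0001}

def unigram_logprob(tokens):
    return sum(math.log(UNIGRAM.get(t, 1e-6)) for t in tokens)

def lm_logprob(tokens):
    # Placeholder for log P(sentence) under a real language model
    # (n-gram, RNN, etc.); any such model would slot in here.
    return unigram_logprob(tokens) - 0.5 * len(tokens)

def slor(tokens):
    # Subtracting the unigram term factors out lexical frequency;
    # dividing by length factors out sentence length.
    return (lm_logprob(tokens) - unigram_logprob(tokens)) / len(tokens)

print(slor(["the", "dog", "barked"]))               # -0.5
print(slor(["the", "dog", "barked", "furiously"]))  # -0.5: unaffected by length or rarity here
```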


Philosophy of Linguistics | 2010

Computational Learning Theory and Language Acquisition

Alexander Clark; Shalom Lappin

Computational learning theory explores the limits of learnability. Studying language acquisition from this perspective involves identifying classes of languages that are learnable from the available data, within the limits of time and computational resources available to the learner. Different models of learning can yield radically different learnability results, where these depend on the assumptions of the model about the nature of the learning process, and the data, time, and resources that learners have access to. To the extent that such assumptions accurately reflect human language learning, a model that invokes them can offer important insights into the formal properties of natural languages, and the way in which their representations might be efficiently acquired. In this chapter we consider several computational learning models that have been applied to the language learning task. Some of these have yielded results that suggest that the class of natural languages cannot be efficiently learned from the primary linguistic data (PLD) available to children, through
