Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jennifer Sleeman is active.

Publication


Featured research published by Jennifer Sleeman.


International Conference on Computational Linguistics | 2014

Meerkat Mafia: Multilingual and Cross-Level Semantic Textual Similarity Systems

Abhay L. Kashyap; Lushan Han; Roberto Yus; Jennifer Sleeman; Taneeya W. Satyapanich; Sunil Gandhi; Tim Finin

We describe UMBC's systems developed for the SemEval 2014 tasks on Multilingual Semantic Textual Similarity (Task 10) and Cross-Level Semantic Similarity (Task 3). Our best submission in the Multilingual task ranked second in both the English and Spanish subtasks using an unsupervised approach. Our best systems for the Cross-Level task ranked second in the Paragraph-Sentence subtask and first in both the Sentence-Phrase and Word-Sense subtasks. The system ranked first for the Phrase-Word subtask but was not included in the official results due to a late submission.
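As a minimal sketch of the unsupervised idea behind such systems, the example below scores a sentence pair by cosine similarity of TF-IDF vectors; this stands in for the richer distributional word similarity model used in the actual UMBC systems and assumes scikit-learn is available.

```python
# Minimal unsupervised STS sketch: scores a sentence pair by cosine
# similarity of TF-IDF vectors. Illustration only; the UMBC systems
# rely on a much richer distributional similarity model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def sts_score(sent_a: str, sent_b: str) -> float:
    """Return a similarity score in [0, 1] for two sentences."""
    vectorizer = TfidfVectorizer().fit([sent_a, sent_b])
    vecs = vectorizer.transform([sent_a, sent_b])
    return float(cosine_similarity(vecs[0], vecs[1])[0, 0])

print(sts_score("A man is playing a guitar.",
                "Someone plays an acoustic guitar."))
```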


AI Magazine | 2015

Entity Type Recognition for Heterogeneous Semantic Graphs

Jennifer Sleeman; Tim Finin; Anupam Joshi

We describe an approach for identifying fine-grained entity types in heterogeneous data graphs that is effective for unstructured data or when the underlying ontologies or semantic schemas are unknown. Identifying fine-grained entity types, rather than a few high-level types, supports coreference resolution in heterogeneous graphs by reducing the number of possible coreference relations that must be considered. Big data problems that involve integrating data from multiple sources can benefit from our approach when the data's ontologies are unknown, inaccessible, or semantically trivial. For such cases, we use supervised machine learning to map entity attributes and relations to a known set of attributes and relations from appropriate background knowledge bases to predict instance entity types. We evaluated this approach in experiments on data from DBpedia, Freebase, and Arnetminer using DBpedia as the background knowledge base.
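As a rough, hypothetical illustration of predicting entity types from attribute and relation names, the sketch below trains a supervised classifier over bag-of-predicate features; the toy predicates and type labels are invented and stand in for mappings against a background knowledge base such as DBpedia.

```python
# Hedged sketch of fine-grained entity typing from predicate names,
# treating each entity as a bag of attribute/relation labels. Toy data
# replaces the mapping to a real background knowledge base.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training entities: predicate-name counts -> fine-grained type.
entities = [
    ({"birthPlace": 1, "almaMater": 1, "spouse": 1}, "Person/Scientist"),
    ({"birthPlace": 1, "team": 1, "position": 1}, "Person/Athlete"),
    ({"foundingYear": 1, "headquarters": 1}, "Organisation/Company"),
]
X_dicts, y = zip(*entities)

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Predict a type for a new entity described only by its predicates.
new_entity = {"birthPlace": 1, "almaMater": 1}
print(clf.predict(vec.transform([new_entity]))[0])
```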


Language Resources and Evaluation | 2016

Robust semantic text similarity using LSA, machine learning, and linguistic resources

Abhay L. Kashyap; Lushan Han; Roberto Yus; Jennifer Sleeman; Taneeya W. Satyapanich; Sunil Gandhi; Tim Finin

Semantic textual similarity is a measure of the degree of semantic equivalence between two pieces of text. We describe the SemSim system and its performance in the *SEM 2013 and SemEval-2014 tasks on semantic textual similarity. At the core of our system lies a robust distributional word similarity component that combines latent semantic analysis and machine learning augmented with data from several linguistic resources. We used a simple term alignment algorithm to handle longer pieces of text. Additional wrappers and resources were used to handle task specific challenges that include processing Spanish text, comparing text sequences of different lengths, handling informal words and phrases, and matching words with sense definitions. In the *SEM 2013 task on Semantic Textual Similarity, our best performing system ranked first among the 89 submitted runs. In the SemEval-2014 task on Multilingual Semantic Textual Similarity, we ranked a close second in both the English and Spanish subtasks. In the SemEval-2014 task on Cross-Level Semantic Similarity, we ranked first in Sentence–Phrase, Phrase–Word, and Word–Sense subtasks and second in the Paragraph–Sentence subtask.
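To make the LSA component concrete, here is a minimal, hedged sketch that derives low-dimensional latent vectors from TF-IDF features and compares documents by cosine similarity; the toy corpus is invented, and the SemSim system layers machine learning and linguistic resources on top of a component like this.

```python
# LSA-style similarity sketch: TF-IDF followed by truncated SVD gives
# low-dimensional "latent semantic" vectors whose cosine similarity
# approximates semantic relatedness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "stocks fell on the trading floor",
    "the market rallied after the report",
]
tfidf = TfidfVectorizer().fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Compare the two finance-related sentences, and a finance vs. pet pair.
print(cosine_similarity([lsa[2]], [lsa[3]])[0, 0])
print(cosine_similarity([lsa[0]], [lsa[2]])[0, 0])
```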


International Conference on Data Engineering | 2012

Opaque Attribute Alignment

Jennifer Sleeman; Rafael Alonso; Hua Li; Art Pope; Antonio Badia

Ontology alignment is the process of mapping concepts, classes, and attributes between different ontologies, providing a way to achieve interoperability. While there has been considerable research in this area, most approaches that rely upon the alignment of attributes use label-based string comparisons of property names. The ability to process opaque or non-interpreted attribute names is a necessary component of attribute alignment. We describe a new attribute alignment approach to support ontology alignment that uses density estimation as a means of determining alignment among objects. Using a combination of similarity hashing, kernel density estimation (KDE), and cross entropy, we show promising F-measure scores on the standard Ontology Alignment Evaluation Initiative (OAEI) 2011 benchmark.
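A hedged sketch of the density-estimation idea: model the numeric values of two attributes with kernel density estimates and score candidate alignments by an approximate cross entropy. Similarity hashing and the rest of the pipeline are omitted, and the data below are synthetic.

```python
# Density-based attribute alignment sketch: values of two attributes are
# modeled with Gaussian KDEs, and a Monte Carlo cross entropy between the
# densities scores how well the attributes align.
import numpy as np
from scipy.stats import gaussian_kde

def cross_entropy(samples_p: np.ndarray, samples_q: np.ndarray) -> float:
    """Approximate H(p, q) = -E_p[log q] from samples of each attribute."""
    kde_q = gaussian_kde(samples_q)
    densities = np.clip(kde_q(samples_p), 1e-12, None)  # avoid log(0)
    return float(-np.mean(np.log(densities)))

rng = np.random.default_rng(0)
ages_a = rng.normal(35, 10, 500)        # "age" attribute in ontology A
ages_b = rng.normal(36, 9, 500)         # candidate attribute in ontology B
salaries_b = rng.normal(50000, 8000, 500)

# Lower cross entropy suggests a better attribute alignment.
print(cross_entropy(ages_a, ages_b))
print(cross_entropy(ages_a, salaries_b))
```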


International Semantic Web Conference | 2012

Online Unsupervised Coreference Resolution for Semi-Structured Heterogeneous Data

Jennifer Sleeman

A pair of RDF instances are said to corefer when they are intended to denote the same thing in the world, for example, when two nodes of type foaf:Person describe the same individual. This problem is central to integrating and inter-linking semi-structured datasets. We are developing an online, unsupervised coreference resolution framework for heterogeneous, semi-structured data. The online aspect requires us to process new instances as they appear and not as a batch. The instances are heterogeneous in that they may contain terms from different ontologies whose alignments are not known in advance. Our framework encompasses a two-phased clustering algorithm that is both flexible and distributable, a probabilistic multidimensional attribute model that will support robust schema mappings, and a consolidation algorithm that will be used to perform instance consolidation in order to improve accuracy rates over time by addressing data sparseness.
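As a rough, hypothetical sketch of the online aspect only, the following incremental clustering assigns each arriving instance to its most similar existing cluster by Jaccard overlap of attribute-value pairs, or starts a new cluster; the framework's two-phased algorithm and probabilistic attribute model are not reproduced here.

```python
# Online (incremental) coreference clustering sketch: instances arrive one
# at a time and are merged into the best-matching cluster when the Jaccard
# overlap of their attribute-value pairs exceeds a threshold.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def online_corefer(instances, threshold=0.3):
    clusters = []      # each cluster: set of attribute-value pairs seen so far
    assignments = []   # cluster id assigned to each instance, in order
    for inst in instances:
        pairs = set(inst.items())
        scores = [jaccard(pairs, c) for c in clusters]
        if scores and max(scores) >= threshold:
            best = scores.index(max(scores))
            clusters[best] |= pairs
            assignments.append(best)
        else:
            clusters.append(set(pairs))
            assignments.append(len(clusters) - 1)
    return assignments

stream = [
    {"name": "J. Sleeman", "affiliation": "UMBC"},
    {"name": "J. Sleeman", "homepage": "example.org/js"},
    {"name": "T. Finin", "affiliation": "UMBC"},
]
print(online_corefer(stream))  # [0, 0, 1]: the first two instances corefer
```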


Big Data Challenges, Research, and Technologies in the Earth and Planetary Sciences Workshop, IEEE Int. Conf. on Big Data | 2016

Dynamic Topic Modeling to Infer the Influence of Research Citations on IPCC Assessment Reports

Jennifer Sleeman; Milton Halem; Tim Finin; Mark A. Cane

A common Big Data problem is the need to integrate large temporal data sets from various data sources into one comprehensive structure. Having the ability to correlate evolving facts between data sources can be especially useful in supporting a number of desired application functions such as inference and influence identification. As a real-world application, we use climate change publications from the Intergovernmental Panel on Climate Change, which publishes climate change assessment reports every five years and currently has over 25 years of published content. These reports often reference thousands of research papers. We use dynamic topic modeling as a basis for combining the report and citation domains into one structure. We are able to correlate documents between the two domains to understand how the research has influenced the reports and how this influence has changed over time. In this use case, the report topic model covered 410 documents with a vocabulary of 5,911 terms, while the citation topic model covered close to 200,000 research papers with a vocabulary of 25,154 terms.
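A small stand-in sketch for the topic modeling step: fitting LDA separately per time slice over a shared vocabulary lets topic-word distributions be compared across slices. Genuinely dynamic topic models chain the slices probabilistically, and the documents below are toy examples, not the IPCC corpus.

```python
# Per-time-slice LDA over a shared vocabulary, a simplified stand-in for
# dynamic topic modeling: topics from different slices can be compared
# because they are expressed over the same word indices.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

slices = {
    1990: ["ocean temperature rise", "carbon dioxide emissions grow"],
    2010: ["sea level rise accelerates", "emissions and extreme weather"],
}
vectorizer = CountVectorizer()
vectorizer.fit([doc for docs in slices.values() for doc in docs])
vocab = vectorizer.get_feature_names_out()

for year, docs in slices.items():
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(vectorizer.transform(docs))
    for k, topic in enumerate(lda.components_):
        top_words = [vocab[i] for i in topic.argsort()[-3:][::-1]]
        print(year, k, top_words)
```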


American Geophysical Union Fall Meeting 2016 | 2016

Advanced Large Scale Cross Domain Temporal Topic Modeling Algorithms to Infer the Influence of Recent Research on IPCC Assessment Reports

Jennifer Sleeman; Milton Halem; Tim Finin; Mark A. Cane

One way of understanding the evolution of science within a particular scientific discipline is by studying the temporal influences that research publications had on that discipline. We provide a methodology for conducting such an analysis by employing cross-domain topic modeling and local cluster mappings of those publications with the historical texts to understand exactly when and how they influenced the discipline. We apply our method to the Intergovernmental Panel on Climate Change (IPCC) Assessment Reports and the citations therein. The IPCC reports were compiled by thousands of Earth scientists, the assessments were issued approximately every five years over a 30-year span, and together they cite over 200,000 research papers.
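The cross-domain mapping idea can be illustrated with a hypothetical sketch that links topics from a report model and a citation model by cosine similarity of their topic-word distributions; the vocabulary and distributions below are invented, and the paper's actual mapping procedure is richer.

```python
# Toy cross-domain topic mapping: each report topic is matched to the
# citation topic whose word distribution is most similar, approximating
# "which citation topic influenced which report topic".
import numpy as np

vocab = ["ocean", "ice", "carbon", "model", "policy"]
report_topics = np.array([[0.40, 0.40, 0.10, 0.05, 0.05],   # cryosphere-like
                          [0.05, 0.05, 0.30, 0.10, 0.50]])  # policy-like
citation_topics = np.array([[0.45, 0.35, 0.10, 0.05, 0.05],
                            [0.10, 0.05, 0.50, 0.30, 0.05]])

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

for i, rt in enumerate(report_topics):
    best = max(range(len(citation_topics)),
               key=lambda k: cosine(rt, citation_topics[k]))
    print(f"report topic {i} best matches citation topic {best}")
```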


Proceedings of the Third International Workshop on Social Data on the Web | 2010

Computing FOAF Co-reference Relations with Rules and Machine Learning

Jennifer Sleeman; Tim Finin


National Conference on Artificial Intelligence | 2010

A Machine Learning Approach to Linking FOAF Instances

Jennifer Sleeman; Tim Finin


International Semantic Web Conference | 2015

Topic Modeling for RDF Graphs

Jennifer Sleeman; Tim Finin; Anupam Joshi

Collaboration


Dive into Jennifer Sleeman's collaborations.

Top Co-Authors

Tim Finin, University of Maryland

Lushan Han, University of Maryland

Roberto Yus, University of Zaragoza

Antonio Badia, University of Louisville

Art Pope, Science Applications International Corporation