Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Beracah Yankama is active.

Publication


Featured research published by Beracah Yankama.


IEEE Transactions on Biomedical Engineering | 2011

Multiscale Mathematical Modeling to Support Drug Development

David Nordsletten; Beracah Yankama; Renato Umeton; V. V. S. Ayyadurai; C.F. Dewey

It is widely recognized that major improvements are required in the methods currently used to develop new therapeutic drugs. The time from initial target identification to commercialization can be 10-14 years and incur costs in the hundreds of millions of dollars. Even after substantial investment, only 30-40% of the candidate compounds entering clinical trials are successful. We propose that multiscale mathematical pathway modeling can be used to decrease the time required to bring candidate drugs to clinical trial and to increase the probability that they will be successful in humans. The requirements for multiple time scales and spatial scales are discussed, and new computational paradigms are identified to address the increased complexity of modeling.
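
The kind of modeling the abstract refers to couples fast molecular events to much slower physiological responses in a single system of differential equations. The sketch below is a generic illustration of such a multi-timescale pathway model, not the authors' model: the binding-and-response scheme, the rate constants, and the use of SciPy's stiff BDF integrator are all illustrative assumptions.

```python
# Minimal sketch of a pathway-level ODE model with widely separated time
# scales (fast receptor binding vs. slow downstream response). The reaction
# scheme and rate constants are illustrative only, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, y, k_on=100.0, k_off=10.0, k_act=0.5, k_deg=0.01):
    drug, receptor, complex_, response = y
    bind = k_on * drug * receptor - k_off * complex_   # fast (seconds)
    act = k_act * complex_                             # intermediate
    deg = k_deg * response                             # slow (hours)
    return [-bind, -bind, bind, act - deg]

y0 = [1.0, 1.0, 0.0, 0.0]
# A stiff solver (BDF) handles the disparate time scales efficiently.
sol = solve_ivp(pathway, (0.0, 1000.0), y0, method="BDF", rtol=1e-8)
print(sol.y[:, -1])  # state of all four species at the final time point
```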


Cognitive Aspects of Computational Language Acquisition | 2013

Treebank Parsing and Knowledge of Language

Sandiway Fong; Igor Malioutov; Beracah Yankama

Over the past 15 years, there has been great success in using linguistically annotated sentence collections, such as the Penn Treebank (PTB), to construct statistically based parsers. This success leads naturally to the question of the extent to which such systems acquire full “knowledge of language” in the conventional linguistic sense. This chapter addresses that question. It assesses the knowledge attained by several current statistically trained parsers in the areas of tense marking, questions, English passives, and the acquisition of “unnatural” language constructions, extending previous results showing that boosting training data with targeted examples can, in certain cases, improve performance, but also indicating that such systems may be too powerful, in the sense that they can learn “unnatural” language patterns. Going beyond this, the chapter advances a general approach to incorporating linguistic knowledge by means of “linguistic regularization,” which canonicalizes predicate-argument structure and so improves statistical training and parser performance.
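
For readers unfamiliar with treebank-trained parsing, the sketch below shows the basic pipeline the chapter takes as its starting point: inducing a probabilistic context-free grammar from Penn Treebank annotations and recovering a most-probable parse. It uses the small PTB sample bundled with NLTK and is only a toy stand-in for the statistically trained parsers the chapter actually evaluates.

```python
# Toy treebank-trained parser: induce a PCFG from the small Penn Treebank
# sample bundled with NLTK and recover the most probable parse for one
# sentence taken from the training data.
import nltk
from nltk.corpus import treebank

nltk.download("treebank", quiet=True)  # ~10% sample of the PTB WSJ section

productions = []
for fileid in treebank.fileids()[:2]:            # two files keep the demo fast
    for tree in treebank.parsed_sents(fileid):
        tree.collapse_unary(collapsePOS=False)   # standard tree normalizations
        tree.chomsky_normal_form(horzMarkov=2)
        productions += tree.productions()

grammar = nltk.induce_pcfg(nltk.Nonterminal("S"), productions)
parser = nltk.ViterbiParser(grammar)

sentence = ["Mr.", "Vinken", "is", "chairman", "of", "Elsevier", "N.V.", ",",
            "the", "Dutch", "publishing", "group", "."]
for parse in parser.parse(sentence):
    print(parse)  # highest-probability parse under the induced PCFG
    break
```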


The Linguistic Review | 2018

Colorless green ideas do sleep furiously: gradient acceptability and the nature of the grammar

Jon Sprouse; Beracah Yankama; Sagar Indurkhya; Sandiway Fong

In their recent paper, Lau, Clark, and Lappin (henceforth LCL) explore the idea that the probability of the occurrence of word strings can form the basis of an adequate theory of grammar (Lau, Jey H., Alexander Clark & Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science 41(5):1201–1241). To make their case, they present the results of correlating the output of several probabilistic models trained solely on naturally occurring sentences with the gradient acceptability judgments that humans report for ungrammatical sentences derived from round-trip machine translation errors. In this paper, we first explore the logic of the Lau et al. argument, both in terms of the choice of evaluation metric (gradient acceptability) and in the choice of test data set (machine translation errors on random sentences from a corpus). We then present our own series of studies intended to allow for a better comparison between LCL’s models and existing grammatical theories. We evaluate two of LCL’s probabilistic models (trigrams and recurrent neural networks) against three data sets (taken from journal articles, a textbook, and Chomsky’s famous colorless-green-ideas sentence), using three evaluation metrics (LCL’s gradience metric, a categorical version of the metric, and the experimental-logic metric used in the syntax literature). Our results suggest there are very real, measurable cost-benefit tradeoffs inherent in LCL’s models across the three evaluation metrics. The gain in explanation of gradience (between 13% and 31% of gradience) is offset by losses in the other two metrics: a 43%-49% loss in coverage based on a categorical metric of explaining acceptability, and a loss of 12%-35% in explaining experimentally defined phenomena. This suggests that anyone wishing to pursue LCL’s models as competitors with existing syntactic theories must either be satisfied with this tradeoff or modify the models to capture the phenomena that are not currently captured.
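
As a rough illustration of the probability-based scoring under discussion, the sketch below trains a small add-one-smoothed trigram language model and computes a length-normalized log probability for Chomsky's colorless-green-ideas sentence and its reversal. The training corpus (the Brown news section), the smoothing choice, and the mean-log-probability score are illustrative assumptions; LCL's models are trained on far larger corpora, and their gradience metrics (such as SLOR) also correct for word frequency.

```python
# Toy trigram scoring of sentence acceptability. The modelling choices here
# (Brown news section as training data, add-one smoothing, mean log
# probability as the score) are illustrative assumptions, not LCL's setup.
import nltk
from nltk.corpus import brown
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams

nltk.download("brown", quiet=True)

# Train an add-one-smoothed trigram model on a small news corpus.
sents = [[w.lower() for w in s] for s in brown.sents(categories="news")]
train, vocab = padded_everygram_pipeline(3, sents)
lm = Laplace(3)
lm.fit(train, vocab)

def mean_logprob(sentence):
    """Average log2 probability per trigram: a crude, length-normalized score."""
    tokens = list(pad_both_ends(sentence.lower().split(), n=3))
    trigrams = list(ngrams(tokens, 3))
    return sum(lm.logscore(w, list(ctx)) for *ctx, w in trigrams) / len(trigrams)

# On a corpus this small most of these trigrams are unseen, so the two scores
# may differ little; LCL's much larger training corpora are what make such
# scores informative.
print(mean_logprob("colorless green ideas sleep furiously"))
print(mean_logprob("furiously sleep ideas green colorless"))
```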


Cognitive Science | 2011

Poverty of the Stimulus Revisited

Paul M. Pietroski; Beracah Yankama; Noam Chomsky


Language Resources and Evaluation | 2012

A large scale annotated child language construction database

Aline Villavicencio; Beracah Yankama; Marco Idiart


Proceedings of the Workshop on Computational Models of Language Acquisition and Loss | 2012

Get out but don't fall down: verb-particle constructions in child language

Aline Villavicencio; Marco Idiart; Carlos Ramisch; Vitor De Araujo; Beracah Yankama


2010

OREMP: Ontology Reasoning Engine for Molecular Pathways

Renato Umeton; Beracah Yankama; Giuseppe Nicosia; C. Forbes Dewey


Archive | 2010

A Cross-Format Framework for Consistent Information Integration among Molecular Pathways and Ontologies

Renato Umeton; Beracah Yankama; Giuseppe Nicosia; C.F. Dewey


Elsevier | 2013

In Silico Modeling of Shear-Stress-Induced Nitric Oxide Production in Endothelial Cells through Systems Biology

Andrew Koo; David Nordsletten; Renato Umeton; Beracah Yankama; V. A. Shiva Ayyadurai; C. Forbes Dewey; Guillermo García-Cardeña


Proceedings of the Workshop on Computational Models of Language Acquisition and Loss | 2012

An annotated English child language database

Aline Villavicencio; Beracah Yankama; Rodrigo Wilkens; Marco Idiart

Collaboration


Dive into Beracah Yankama's collaborations.

Top Co-Authors

Renato Umeton
Sapienza University of Rome

Shiva Ayyadurai
Massachusetts Institute of Technology

Aline Villavicencio
Universidade Federal do Rio Grande do Sul

Marco Idiart
Universidade Federal do Rio Grande do Sul

Andrew Koo
Brigham and Women's Hospital

C. Forbes Dewey
Massachusetts Institute of Technology

C.F. Dewey
Massachusetts Institute of Technology

Sandiway Fong
Massachusetts Institute of Technology