Publications


Featured research published by Lorenzo Carlucci.


Topics in Cognitive Science | 2013

On the Necessity of U‐Shaped Learning

Lorenzo Carlucci; John Case

A U-shaped curve in a cognitive-developmental trajectory refers to a three-step process: good performance followed by bad performance followed by good performance once again. U-shaped curves have been observed in a wide variety of cognitive-developmental and learning contexts. U-shaped learning seems to contradict the idea that learning is a monotonic, cumulative process and thus constitutes a challenge for competing theories of cognitive development and learning. U-shaped behavior in language learning (in particular in learning English past tense) has become a central topic in the Cognitive Science debate about learning models. Antagonist models (e.g., connectionism versus nativism) are often judged on their ability to model or account for U-shaped behavior. The prior literature is mostly occupied with explaining how U-shaped behavior occurs. Instead, we are interested in the necessity of this kind of apparently inefficient strategy. We present and discuss a body of results in the abstract mathematical setting of (extensions of) Gold-style computational learning theory addressing a mathematically precise version of the following question: Are there learning tasks that require U-shaped behavior? All notions considered are learning in the limit from positive data. We present results about the necessity of U-shaped learning in classical models of learning as well as in models with bounds on the memory of the learner. The pattern emerges that, for parameterized, cognitively relevant learning criteria, beyond very few initial parameter values, U-shapes are necessary for full learning power! We discuss the possible relevance of the above results for the Cognitive Science debate about learning models as well as directions for future research.
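
For readers outside computational learning theory, here is a generic sketch, in standard textbook notation rather than the paper's own, of Gold-style identification in the limit from positive data and of a U-shape in a learner's conjecture sequence; T denotes a text (an enumeration of the positive data) for the language L, T[n] its first n elements, and W_e the language generated by grammar e.

\[
M \text{ learns } L \text{ in the limit} \iff \text{for every text } T \text{ for } L\ \exists n_0\ \forall n \ge n_0:\ M(T[n]) = e \ \text{with}\ W_e = L.
\]
\[
\text{U-shape on } T:\ \exists i<j<k \ \text{with}\ W_{M(T[i])} = L,\ W_{M(T[j])} \ne L,\ W_{M(T[k])} = L.
\]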


Information & Computation | 2007

Results on memory-limited U-shaped learning

Lorenzo Carlucci; John Case; Sanjay Jain; Frank Stephan

U-shaped learning is a learning behaviour in which the learner first learns a given target behaviour, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theory literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we consider, then, the question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs between the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data, and the learner's choice of hypothesis space.
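
One common way to formalize such memory limitations, given here only as a standard illustration and not as the paper's exact models, is iterative learning: the learner's next conjecture is computed from its previous conjecture and the current datum alone, possibly together with a bounded long-term memory.

\[
M(T[n+1]) = F\bigl(M(T[n]),\, T(n)\bigr), \qquad \text{with no access to the earlier data } T(0),\dots,T(n-1).
\]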


Algorithmic Learning Theory | 2005

Non U-shaped vacillatory and team learning

Lorenzo Carlucci; John Case; Sanjay Jain; Frank Stephan

U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity for the whole hierarchy of classes of vacillatory learning, where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the limit between at most k grammars, where k ≥ 1. Non U-shaped vacillatory learning is shown to be restrictive: every non U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with the parameter k = 2 is possible then non U-shaped behaviourally correct learning is also possible. But for k = 3, surprisingly, there is a class witnessing that this implication fails.
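
In standard notation (a sketch only; the paper's definitions may differ in detail), vacillatory learning with parameter k, often written Fex_k, requires that all but finitely many of the learner's conjectures fall within a fixed set of at most k correct grammars; explanatory learning is the case k = 1, and behaviourally correct learning drops the bound on the number of distinct correct grammars.

\[
M\ \mathrm{Fex}_k\text{-learns } L \iff \text{for every text } T \text{ for } L\ \exists D\ \bigl(|D| \le k,\ \forall e \in D\ W_e = L\bigr) \text{ such that } M(T[n]) \in D \text{ for all but finitely many } n.
\]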


Conference on Learning Theory | 2006

Memory-limited U-shaped learning

Lorenzo Carlucci; John Case; Sanjay Jain; Frank Stephan

U-shaped learning is a learning behaviour in which the learner first learns something, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theory literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we consider, then, this question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs between the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data, and the learner's choice of hypothesis space.


arXiv: Logic | 2011

Unprovability results involving braids

Lorenzo Carlucci; Patrick Dehornoy; Andreas Weiermann

We construct long sequences of braids that are descending with respect to the standard order of braids (‘Dehornoy order’), and we deduce that, contrary to all usual algebraic properties of braids, certain simple combinatorial statements involving the braid order are not provable in the subsystems IΣ₁ or IΣ₂ of the standard Peano system (although they are provable in stronger systems of arithmetic).
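
For reference, two standard definitions used above, stated in their usual textbook form rather than quoted from the paper: a braid word is σ-positive if the generator of least index occurring in it occurs only positively, and IΣ_n is first-order arithmetic with the induction scheme restricted to Σ_n formulas.

\[
\beta < \gamma \ (\text{Dehornoy order}) \iff \beta^{-1}\gamma \text{ admits a } \sigma\text{-positive representative word}; \qquad I\Sigma_n = \mathrm{PA}^{-} + \Sigma_n\text{-induction}.
\]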


Conference on Computational Complexity | 2011

Paris-Harrington Tautologies

Lorenzo Carlucci; Nicola Galesi; Massimo Lauria

We study the proof complexity of Paris-Harrington's Large Ramsey Theorem for bi-colorings of graphs. We prove a non-trivial conditional lower bound in Resolution and a quasi-polynomial upper bound in bounded-depth Frege. The lower bound is conditional on a (very reasonable) hardness assumption for a weak (quasi-polynomial) Pigeonhole principle in Res(2). We show that under such assumption, there is no refutation of the Paris-Harrington formulas of size quasi-polynomial in the number of propositional variables. The proof technique for the lower bound extends the idea of using a combinatorial principle to blow up a counterexample for another combinatorial principle beyond the threshold of inconsistency. A strong link with the proof complexity of an unbalanced Ramsey principle for triangles is established. This is obtained by adapting some constructions due to Erdős and Mills.
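
For context, the first-order Paris-Harrington principle, stated here in its usual set-theoretic form (the paper works with a propositional encoding of the two-color graph case), asserts the existence of relatively large homogeneous sets:

\[
\forall e, r, k\ \exists N\ \forall C\colon [N]^e \to \{1,\dots,r\}\ \exists H \subseteq \{0,\dots,N-1\}:\ C \text{ is constant on } [H]^e,\ |H| \ge k,\ |H| \ge \min H.
\]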


Order | 2018

A Note on Hindman-Type Theorems for Uncountable Cardinals

Lorenzo Carlucci

Recent results of Hindman, Leader and Strauss and of Fernández-Bretón and Rinot showed that natural versions of Hindman's Theorem fail for all uncountable cardinals. On the other hand, Komjáth proved a result in the positive direction, showing that there are arbitrarily large abelian groups satisfying some Hindman-type property. In this note we show how a family of natural Hindman-type theorems for uncountable cardinals can be obtained by adapting some recent results of the author from their original countable setting.
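
For reference, the classical countable Hindman Finite Sums Theorem, which the uncountable variants discussed above generalize by replacing ℕ with an uncountable cardinal or abelian group, states (standard formulation, not taken from the paper):

\[
\forall r\ \forall c\colon \mathbb{N} \to \{1,\dots,r\}\ \exists \text{ infinite } A \subseteq \mathbb{N}:\ c \text{ is constant on } \mathrm{FS}(A) = \Bigl\{\textstyle\sum_{x \in F} x : \emptyset \ne F \subseteq A,\ F \text{ finite}\Bigr\}.
\]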


Archive for Mathematical Logic | 2018

A weak variant of Hindman’s Theorem stronger than Hilbert’s Theorem

Lorenzo Carlucci

Hirst investigated a natural restriction of Hindman's Finite Sums Theorem, called Hilbert's Theorem, and proved it equivalent over RCA₀ to the Infinite Pigeonhole Principle for all colors.
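
Here RCA₀ is the usual base theory of reverse mathematics; in standard terms (not a definition taken from this paper):

\[
\mathrm{RCA}_0 \;=\; \text{basic arithmetic} \;+\; \Sigma^0_1\text{-induction} \;+\; \Delta^0_1\text{-comprehension}.
\]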


Conference on Computability in Europe | 2017

New Bounds on the Strength of Some Restrictions of Hindman’s Theorem

Lorenzo Carlucci; Leszek Aleksander Kołodziejczyk; Francesco Lepore; Konrad Zdanowski


Conference on Computability in Europe | 2012

A note on Ramsey theorems and Turing jumps

Lorenzo Carlucci; Konrad Zdanowski


Collaboration


Dive into Lorenzo Carlucci's collaborations.

Top Co-Authors

John Case, University of Delaware
Sanjay Jain, National University of Singapore
Frank Stephan, National University of Singapore
Nicola Galesi, Sapienza University of Rome
Konrad Zdanowski, Polish Academy of Sciences
Massimo Lauria, Royal Institute of Technology
Andrey Bovykin, Steklov Mathematical Institute
Gyesik Lee, Seoul National University
Francesco Lepore, Sapienza University of Rome