Research

What we can learn depends on what we already know; a child who can’t count cannot learn arithmetic. Language acquisition, like learning in general, is incremental. I’m interested in how children gradually grow their grammars, building on prior knowledge in order to learn from their data. I use computational and behavioral tools to study this process in the first stages of syntax and semantics acquisition.

Modeling Development in Language Acquisition

At each point in development, children represent their linguistic input using the knowledge of their language that they currently have available—their developing grammar—together with their other developing cognitive abilities. Their immature input representations are the data that they use to gain new knowledge about their grammar. With that new knowledge, they can represent their input more richly and draw further generalizations, until they incrementally converge on the correct theory of their language.

Understanding this process requires us to specify how learners represent their input before they have acquired their target grammar, and what learning mechanisms enable them to learn from immature input representations. I use behavioral experiments to investigate children’s sentence representations at very early stages of grammatical development, in infancy. I combine these experiments with probabilistic computational models in order to test specific hypotheses about infants’ representational and learning resources, allowing us to evaluate their relative contributions in a particular learning domain.

Non-local Dependencies and Argument Structure

Wh-dependencies, and other phenomena that move subjects and objects from their canonical clause positions, introduce a challenge in early grammar acquisition. Because these dependencies take various forms across languages, learners must discover what they look like in the particular language they are exposed to. For instance, English learners must come to recognize that there is a fronted object in the question What are you holding?, and that it is a wh-phrase. Before they have this knowledge, the frequent presence of wh-questions in their input, as well as other non-canonical sentence types, might lead children to draw faulty inferences, both about where arguments can and cannot occur in their language and about the argument-taking properties of particular verbs.

Using behavioral looking- and listening-time methods, my collaborators and I study how infants in their second year of life represent and process these types of non-local argument dependencies. Using computational methods, I model how infants’ verb argument structure knowledge interacts with their acquisition of non-basic clause types, and how they might “filter” messy data from these types of sentences for the purposes of learning verb argument structure and basic clause structure. Collaborators: Anne Christophe, Naomi Feldman, Mina Hirzel, Tim Hunter, Jeffrey Lidz.

Representing Clause Arguments for Verb Learning

Infants use a verb’s distributions with subjects and objects in order to draw inferences about its meaning. My collaborators and I are interested in how infants relate clause arguments to event participants when drawing these inferences. We ask whether infants primarily expect arguments and participants to match in number, or whether they more flexibly link particular argument and participant relations, e.g. transitive subject to agent and object to patient. This has implications for our theories of early bootstrapping and clause structure development. If infants are able to differentiate subjects and objects in a clause, and in a format that allows for adult-like inferences about meaning, this invites further investigation into how richly they represent those grammatical relations and how they identify them in sentences of their language.

Using preferential looking and habituation-based tasks, my collaborators and I (1) assess how infants view particular events in the world, independent of language, and (2) investigate how they relate particular sentence representations to particular scene representations in order to infer the meanings of new verbs. In related computational and corpus work, I’m asking how infants might identify the ways that core clause arguments are expressed in their language, and how this would interact with their acquisition of verb meanings. I’m currently studying these questions in English, French, and Spanish. Collaborators: Anne Christophe, Angela Xiaoxue He, Nina Hyams, Mina Hirzel, Tyler Knowlton, Jeffrey Lidz, Victoria Mateu, Alexander Williams.