Exploring the mechanisms that underpin language

How do children acquire a productive language system? Traditional approaches assume that this involves acquiring symbolic rules operating over discrete word categories, which has been argued to be “unlearnable” unless the principles of language are innate. In contrast, Liz’s work using artificial language methods (i.e. exposing participants to experimenter-designed miniature languages and testing their learning) has suggested that linguistic productivity is a direct function of language input (e.g., Wonnacott et al., 2012; Wonnacott, 2011; Wonnacott, Brown, & Nation, 2017; Samara, Smith, Brown, & Wonnacott, 2017). These findings can be described in terms of a balance between “item-based” learning – where structures remain associated with particular words – and generalisation. This interpretation fits nicely with the computational work of Perfors, Tenenbaum, and Wonnacott (2010), and more broadly with a statistical learning framework. However, its explanatory value is limited: it offers no clear account of the underlying learning mechanisms.

In 2019 Liz obtained a Leverhulme Trust Research Project Grant to explore a perspective in which learning results from environmental cues reducing uncertainty about outcomes. For language, the “outcomes” are a system of linguistic form contrasts, the “cues” come from the world and from earlier parts of an utterance, and “learning” is dissociating the (huge) set of uninformative cues and mastering the system. This work is conducted in collaboration with Michael Ramscar; Holly Jenkins is currently a postdoc on the project. The approach is inspired by learning theory and uses Naive Discriminative Learning (NDL) models to predict how learners come to anticipate outcomes from the available cues. Combining NDL with artificial language learning experiments allows us to test key predictions about how the distribution and linear order of the input affect generalisation.
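As an illustration (not code from the project), the sketch below shows the error-driven Rescorla-Wagner update on which NDL models are based: cue-outcome weights change in proportion to prediction error, so cues that occur with every outcome end up carrying little predictive weight, mirroring the idea that learning involves discounting uninformative cues. The cue and outcome labels and the learning parameters are purely illustrative assumptions.

```python
# A minimal sketch of the Rescorla-Wagner update underlying NDL models.
# Cue and outcome labels ("wug", "-s", etc.) and the learning rate are
# illustrative assumptions, not materials from the project.
from collections import defaultdict

def rw_update(weights, cues, outcomes, all_outcomes, rate=0.1, lam=1.0):
    """Update cue->outcome weights for one learning event."""
    for o in all_outcomes:
        # Summed support for outcome o from the cues present on this trial.
        activation = sum(weights[(c, o)] for c in cues)
        # Prediction error: was the outcome present (lam) or absent (0)?
        error = (lam if o in outcomes else 0.0) - activation
        for c in cues:
            weights[(c, o)] += rate * error

# Toy example: "wug" reliably predicts the marker "-s", whereas the
# background cue "context" occurs on every trial and so loses out in
# cue competition, ending up with little weight.
weights = defaultdict(float)
events = [({"wug", "context"}, {"-s"}), ({"dax", "context"}, {"-es"})] * 50
for cues, outcomes in events:
    rw_update(weights, cues, outcomes, all_outcomes={"-s", "-es"})

print(round(weights[("wug", "-s")], 2), round(weights[("context", "-s")], 2))
```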


Papers 

• Viviani, E., Ramscar, M., & Wonnacott, E. (2024). The effects of linear order in category learning: Some replications of Ramscar et al. (2010) and their implications for replicating training studies. Cognitive Science. http://doi.org/10.1111/cogs.13445

• Kemper, S. S., Jenkins, H. E., Wonnacott, E., & Ramscar, M. (2024). Rethinking Probabilities: Why Corpus Frequencies Cannot Capture Speakers' Dynamic Linguistic Behavior. In Proceedings of the Annual Meeting of the Cognitive Science Society (Vol. 46). 

• Samara, A., Wonnacott, E., Saxena, G., Maitreyee, R., Fazekas, J., & Ambridge, B. (2024). Learners restrict their linguistic generalizations using preemption but not entrenchment: Evidence from artificial-language-learning studies with adults and children. Psychological Review. 

• Viviani, E., Ramscar, M., & Wonnacott, E. Go above and beyond: Does input variability affect children’s ability to learn spatial adpositions in a novel language? Stage 1 manuscript recommended at PCI RR, June 2022. OSF repository; registered report.

• Vujović, M., Ramscar, M., & Wonnacott, E. (2021). Language learning as uncertainty reduction: The role of prediction error in linguistic generalization and item-learning. Journal of Memory and Language. https://doi.org/10.1016/j.jml.2021.104231


Language change and evolution 

A related line of work, in collaboration with Prof Kenny Smith (Edinburgh) and Dr Olga Feher (Warwick), has used artificial language learning to explore the processes of language change and evolution. Using an iterated artificial language paradigm (whereby each learner “teaches” the next in a chain of language transmission; Smith & Wonnacott, 2010) and paradigms in which learners use artificial languages in communication games (Feher, Wonnacott & Smith, 2016), we have shown how processes of transmission and interaction can amplify weak biases at the level of individual learners, speaking to longstanding questions about the origins of “universals” in human languages.
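The toy simulation below (illustrative assumptions only, not the models reported in the papers above) conveys the basic logic of an iterated learning chain: each simulated learner estimates the proportion of a “regular” variant from the previous learner’s productions, nudged by a small individual bias toward regularisation, and then produces data for the next learner. Over generations the weak bias accumulates.

```python
# A toy iterated-learning chain: a weak per-learner bias toward regularity
# is amplified across generations of transmission. All parameters here
# (sample size, bias strength, number of generations) are hypothetical.
import random

def transmit(p_regular, n_utterances=20, bias=0.05, generations=30):
    history = [p_regular]
    for _ in range(generations):
        # The previous generation's output: a sample of utterances.
        data = [random.random() < p_regular for _ in range(n_utterances)]
        estimate = sum(data) / n_utterances
        # A weak individual bias nudges each learner toward the regular form.
        p_regular = min(1.0, estimate + bias)
        history.append(p_regular)
    return history

random.seed(1)
chain = transmit(p_regular=0.5)
print([round(p, 2) for p in chain[::5]])  # gradual drift toward 1.0
```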


Researchers in the Language Learning Lab:

Holly Jenkins 

Eva Viviani 
Elizabeth Wonnacott 


Alumni:

Anna Samara (former postdoc, now collaborator) 
Catriona Silvey 
Maša Vujović 
Elena Zamfir