I am an assistant professor in Natural Language Processing at the University of Edinburgh and a visiting professor at NVIDIA.
If you are a prospective student or industry partner please visit the FAQ page for more information.
Previously, I was a visiting postdoctoral scholar at Stanford University and a postdoctoral fellow in computer science at Mila - Quebec AI Institute in Montreal. In 2021, I obtained a PhD from the University of Cambridge, St John’s College. Once upon a time I studied typological and historical linguistics at the University of Pavia (deep in my heart, I am still a humanist).
My research has been featured in The Economist and Scientific American. The main research foci of my lab are:
modular deep learning: I am interested in designing neural architectures that route information to specialised modules (e.g., sparse subnetworks). This facilitates systematic generalisation and conditional computation. It also helps control the model’s behaviour, e.g., mitigating its hallucinations.
efficient architectures: The goal is to make language models more efficient by compressing their intermediate representations and memory. As a side effect, this also frees them from tokenizers and lets them learn hierarchical abstractions over raw data.
computational typology: I wish to understand how language varies across the world and its cultures, with the help of computational tools. Multimodal models in particular give us a powerful tool to study how form depends on grounded, embodied representations of meaning and function.
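To make the first focus above concrete, here is a minimal sketch of conditional computation via learned routing, in the spirit of mixture-of-experts architectures. All names, shapes, and the top-1 gating choice are illustrative assumptions, not a description of any specific model from my lab.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class Top1Router:
    """Routes each input vector to one of `n_experts` specialised modules."""

    def __init__(self, d_model, n_experts):
        self.n_experts = n_experts
        self.w_gate = rng.normal(scale=0.1, size=(d_model, n_experts))
        # Each "expert" is a tiny linear module, a stand-in for a
        # specialised subnetwork in a real architecture.
        self.experts = [rng.normal(scale=0.1, size=(d_model, d_model))
                        for _ in range(n_experts)]

    def __call__(self, x):
        # x: (batch, d_model). Gate scores decide which expert fires.
        gates = softmax(x @ self.w_gate)   # (batch, n_experts)
        chosen = gates.argmax(axis=-1)     # one expert per input
        out = np.empty_like(x)
        for i, e in enumerate(chosen):
            # Only the selected module is evaluated: conditional computation.
            out[i] = gates[i, e] * (x[i] @ self.experts[e])
        return out, chosen

router = Top1Router(d_model=8, n_experts=4)
x = rng.normal(size=(5, 8))
y, chosen = router(x)
```

Because only one expert runs per input, compute grows with the number of inputs rather than the number of modules, which is the appeal of this family of architectures.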
My research earned a Google Research Faculty Award and two Best Paper Awards, at EMNLP 2021 and RepL4NLP 2019. I am a board member of SIGTYP, the ACL special interest group for computational typology, a member of the European Laboratory for Learning and Intelligent Systems (ELLIS), and part of the TACL journal editorial team.