Publications

A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically …

The design of widely used vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts …

While achieving state-of-the-art results in multiple tasks and languages, translation-based cross-lingual transfer is often overlooked …

Fine-tuning all parameters of a pre-trained model has become the mainstream approach for transfer learning. To increase its efficiency …

In order to simulate human language capacity, natural language processing systems must complement the explicit information derived from …

We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering datasets for 12 typologically diverse …

Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of …

Can we construct a neural language model which is inductively biased towards learning human language? Motivated by this question, we …

Semantic specialization integrates structured linguistic knowledge from external resources (such as lexical relations in WordNet) into …

Unsupervised pretraining models have been shown to facilitate a wide range of downstream applications. These models, however, still …

Semantic specialization methods fine-tune distributional word vectors using lexical knowledge from external resources (e.g., WordNet) …

Linguistic typology aims to capture structural and semantic variation across the world’s languages. A large-scale typology could …

Semantic specialization is the process of fine-tuning pre-trained distributional word vectors using external lexical knowledge (e.g., …