Natural language is an extremely flexible means of communication. It allows us to quickly say what we want when talking to someone, to formulate complex ideas in long texts, to speak about real and imagined worlds, and to combine words with non-verbal means of communication. When using language to verbalize our thoughts, we usually have a huge space of possibilities: what exactly do we say, and how exactly do we say it? Many of the decisions involved in this process are not governed by the grammar of a language; instead, speakers tailor them to particular situations, goals, and audiences. The general aim of my research is to understand these flexible, communicative aspects of language use by building machines that model text and dialogue.

I am particularly interested in:

  • machine learning methods in natural language generation, both for text and dialogue
  • combining natural language analysis and generation
  • reference and referring expression generation
  • multimodal semantics and its connection to reference
  • visual language grounding
  • conversational aspects of spoken language and their modeling (e.g. hesitations, installments)

Current interests

  • modeling object naming for language & vision
  • computational pragmatics (see recent ACL paper)
  • dynamic or incremental language generation for situated dialogue
  • integrating speech synthesis and language generation
  • reference and distributional semantics
  • leveraging NLG methods for digital humanities
  • multimodal digital humanities