From Text to Meaning
Building computers that understand human language is one of the central goals of artificial intelligence. A key technology in language understanding is semantic parsing: the automatic mapping of a sentence into a formal representation of its meaning, such as a search query, robot command, or logic formula. Together with my coauthors, I have
developed fundamental algorithms for parsing to a widely used type of meaning representation called dependency graphs, and contributed to a better understanding of the neural network architectures that define the state of the art for this task. We have compiled benchmark data sets for the development and evaluation of dependency parsers, and coordinated community-building efforts aimed at comparing different parsing systems and meaning representations.
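To make the notion concrete, a dependency graph can be viewed as a set of labelled arcs between the words of a sentence. The following minimal Python sketch uses a toy sentence and invented relation labels, not data from our benchmarks; it only illustrates the kind of data structure a dependency parser produces:

    # A toy dependency graph for the sentence "Book a flight to Boston".
    # Nodes are word positions; labelled arcs encode semantic relations.
    # Labels and structure are invented for illustration only.

    tokens = ["Book", "a", "flight", "to", "Boston"]

    # Each arc is (head position, dependent position, relation label);
    # position 0 is a virtual root node.
    arcs = [
        (0, 1, "root"),        # "Book" is the top predicate
        (1, 3, "theme"),       # what is being booked: "flight"
        (3, 5, "destination"), # where the flight goes: "Boston"
    ]

    for head, dep, label in arcs:
        head_word = "ROOT" if head == 0 else tokens[head - 1]
        print(f"{head_word} --{label}--> {tokens[dep - 1]}")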
Understanding Representations
A recent breakthrough on the path towards natural language understanding is the development of deep neural network architectures that learn contextualized representations of language, such as BERT and GPT-3. While these models have substantially advanced the state of the art across a wide range of natural language processing tasks, our understanding of the learned representations, and our repertoire of techniques for integrating them with other knowledge sources and reasoning facilities, remain severely limited. In my current research project – a collaboration with Chalmers University of Technology and the company Recorded Future – my group is developing new methods for the interpretation, grounding, and integration of contextualized representations of language.
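As a rough illustration of what "contextualized representations" means in practice, the sketch below uses the Hugging Face transformers library to obtain one vector per token from a pretrained BERT model. The model name and example sentence are placeholders, and the snippet is independent of the methods developed in the project:

    # A minimal sketch; assumes the `transformers` and `torch` packages are installed.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    sentence = "The bank raised interest rates."
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One contextualized vector per wordpiece token,
    # shape: (1, number of tokens, hidden size).
    embeddings = outputs.last_hidden_state
    print(embeddings.shape)

The key point is that the vector for a word such as "bank" depends on the surrounding sentence, which is what distinguishes these representations from static word embeddings.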