
Marco Kuhlmann

Deputy Head of Department, Professor, Head of Unit

Head of the Natural Language Processing Group

Research in natural language processing (NLP)

My research is in the area of natural language processing (NLP). I combine concepts and methods from theoretical computer science and machine learning to design new algorithms for natural language understanding, and apply these algorithms to practical problems in human language technology and text mining.

From Text to Meaning

Building computers that understand human language is one of the central goals of artificial intelligence. A key technology in language understanding is semantic parsing: the automatic mapping of a sentence into a formal representation of its meaning, such as a search query, a robot command, or a logic formula. Together with my coauthors, I have developed fundamental algorithms for parsing to a widely used type of meaning representation called dependency graphs, and contributed to a better understanding of the neural network architectures that define the state of the art for this task. We have compiled benchmark data sets for the development and evaluation of dependency parsers, and coordinated community-building efforts aimed at the comparison of different parsing systems and meaning representations.
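
As a toy illustration (a hypothetical sketch, not code from the parsers mentioned above), a dependency graph can be thought of as a set of labelled edges between the tokens of a sentence; unlike a tree, a token may have more than one incoming edge.

# Toy semantic dependency graph for "Mary wants to buy a book".
# The edge labels (ARG1, ARG2, DET) are illustrative, not from any specific scheme.
tokens = ["Mary", "wants", "to", "buy", "a", "book"]

# Each edge: (head index, dependent index, label).
edges = [
    (1, 0, "ARG1"),  # wants -> Mary: the one who wants
    (1, 3, "ARG2"),  # wants -> buy: the wanted event
    (3, 0, "ARG1"),  # buy -> Mary: Mary is also the buyer (a reentrancy,
                     # which a tree cannot express but a graph can)
    (3, 5, "ARG2"),  # buy -> book: the thing bought
    (5, 4, "DET"),   # book -> a: determiner
]

for head, dep, label in edges:
    print(f"{tokens[head]} --{label}--> {tokens[dep]}")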

Understanding Representations

A recent breakthrough on the way towards natural language understanding is the development of deep neural network architectures that learn contextualized representations of language, such as BERT and GPT-3. While this has substantially advanced the state of the art in natural language processing for a wide range of tasks, our understanding of the learned representations and our repertoire of techniques for integrating them with other knowledge sources and reasoning facilities remain severely limited. In my current research project – a collaboration with Chalmers University of Technology and the company Recorded Future – my group develops new methods for the interpretation, grounding, and integration of contextualized representations of language.
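
As a minimal illustration of what a contextualized representation is (a hypothetical sketch using the open-source Hugging Face transformers library, not the methods developed in the project), one can feed a sentence through a pretrained BERT model and obtain one vector per token, where each vector depends on the surrounding context:

import torch
from transformers import AutoModel, AutoTokenizer

# Load a pretrained BERT model and its tokenizer (downloads on first use).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "The bank raised its interest rates."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.last_hidden_state has shape (1, number_of_tokens, 768): one
# 768-dimensional contextualized vector per (sub)word token. The vector for
# "bank" depends on the whole sentence, so it would differ from the vector
# for "bank" in, e.g., "We walked along the river bank."
print(outputs.last_hidden_state.shape)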

Teaching

I am passionate about teaching and working with students. My main driving force as a researcher is the creative energy that I find in making complicated matters simple, and the very same energy also motivates me as a teacher. 
Today, most of my teaching is linked to my research in natural language processing and machine learning, but I have taught courses and supervised students in many different areas of computer science, and at all levels of university education.

I am involved in several degree programmes at the Faculty of Science and Engineering and the Faculty of Arts and Sciences, and act as the examiner for the following courses:

Language Technology 

Natural Language Processing

Text Mining

Deep Learning for Natural Language Processing

News

Speaker at the European Researchers' Night at LiU

ChatGPT and other AI assistants: Possibilities and challenges

Technological developments in language models and generative AI have been rapid and have prompted strong reactions. Many people are wondering where these developments are heading and what consequences they will have for society.

In a 15-minute talk, Professor Marco Kuhlmann covers the basics of how language models such as ChatGPT work, gives examples of their applications and highlights various technological, social and environmental challenges linked to AI assistants.

From the European Researchers' Night at Linköping University 2023

Publications

2023

Tobias Norlund, Ehsan Doostmohammadi, Richard Johansson, Marco Kuhlmann (2023). On the Generalization Ability of Retrieval-Enhanced Transformers. Findings of the Association for Computational Linguistics, pp. 1485-1493.
Oskar Holmström, Jenny Kunz, Marco Kuhlmann (2023). Bridging the Resource Gap: Exploring the Efficacy of English and Multilingual LLMs for Swedish. Proceedings of the Second Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2023), pp. 92-110.
Ehsan Doostmohammadi, Tobias Norlund, Marco Kuhlmann, Richard Johansson (2023). Surface-Based Retrieval Reduces Perplexity of Retrieval-Augmented Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 521-529.
Emanuel Sanchez Aimar, Arvi Jonnarth, Michael Felsberg, Marco Kuhlmann (2023). Balanced Product of Calibrated Experts for Long-Tailed Recognition. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 19967-19977.

2022

Ehsan Doostmohammadi, Marco Kuhlmann (2022). On the Effects of Video Grounding on Language Models. Proceedings of the First Workshop on Performance and Interpretability Evaluations of Multimodal, Multipurpose, Massive-Scale Models.
