Jenny Kunz

PhD student

Interpretable and explainable NLP

Many current Natural Language Processing (NLP) models are large neural networks that are pre-trained on huge amounts of unlabeled data. How these models store, combine and use the information acquired during this self-supervised training is still largely obscure. I develop techniques that probe how linguistic information is structured within a model, and what the limitations of current models are.
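To illustrate the probing idea, the sketch below trains a simple logistic-regression probe on hidden representations from two (simulated) layers and compares how well each layer encodes a binary linguistic property. All data here is synthetic, for illustration only; in an actual probing study the representations would be frozen hidden states from a pre-trained language model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen hidden states from two layers of a
# pre-trained model (hypothetical data, for illustration only).
n, d = 200, 16
labels = rng.integers(0, 2, size=n).astype(float)  # a binary linguistic property
layer1 = rng.normal(size=(n, d))                   # encodes nothing about the property
layer2 = rng.normal(size=(n, d))
layer2[:, 0] += 3.0 * labels                       # one dimension carries the property

def probe_accuracy(X, y, steps=500, lr=0.1):
    """Train a logistic-regression probe by gradient descent; return training accuracy."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid over probe logits
        grad = p - y                            # gradient of cross-entropy w.r.t. logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return float(((X @ w + b > 0) == (y == 1)).mean())

print(f"probe accuracy on layer 1: {probe_accuracy(layer1, labels):.2f}")
print(f"probe accuracy on layer 2: {probe_accuracy(layer2, labels):.2f}")
```

The probe's accuracy gap between the two layers is then read as evidence about where the property is encoded, though how much a probe's success reflects the model rather than the probe itself is one of the questions this line of work examines.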

Another research interest of mine is self-rationalizing models that generate free-text explanations along with their predictions. While textual explanations are flexible and easy to understand, they come with challenges: their relation to the prediction may be merely speculative, and they can inherit undesirable properties of human explanations, such as the ability to convincingly justify wrong predictions. I work on the evaluation and control of such explanations, and on the relation between explanation design and utility.

CV in brief

  • Bachelor’s degree in Computer Science from Humboldt University of Berlin (2016).
  • Master’s degree in Language Technology from Uppsala University (2018).
  • PhD student at Linköping University (2019–today).
  • Best paper award for our paper “Human Ratings Do Not Reflect Downstream Utility: A Study of Free-Text Explanations for Model Predictions” at BlackboxNLP 2022.
  • Teaching Assistant: Text Mining, Language Technology, Natural Language Processing, Language and Computers, Neural Networks and Deep Learning.

Publications

2023

Oskar Holmström, Jenny Kunz, Marco Kuhlmann (2023). Bridging the Resource Gap: Exploring the Efficacy of English and Multilingual LLMs for Swedish. Proceedings of the Second Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2023), pp. 92–110.

2022

Jenny Kunz, Martin Jirénius, Oskar Holmström, Marco Kuhlmann (2022). Human Ratings Do Not Reflect Downstream Utility: A Study of Free-Text Explanations for Model Predictions. Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pp. 164–177, Article 2022.blackboxnlp-1.14.
Jenny Kunz, Marco Kuhlmann (2022). Where Does Linguistic Information Emerge in Neural Language Models?: Measuring Gains and Contributions across Layers. Proceedings of the 29th International Conference on Computational Linguistics, pp. 4664–4676, Article 1.413.

2021

Jenny Kunz, Marco Kuhlmann (2021). Test Harder Than You Train: Probing with Extrapolation Splits. Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pp. 15–25, Article 2.
