
Jenny Kunz

Postdoc

Research interests: Parameter-efficient language adaptation, interpretability, explainability and modularisation of NLP models.

Interpretable and explainable NLP

Many current Natural Language Processing (NLP) models are large neural networks that are pre-trained on huge amounts of unlabelled data. How these models store, combine and use information from this self-supervised training is still largely obscure. I develop techniques that probe how linguistic information is structured within the model, and what the limitations of current models are.
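The core idea behind probing can be illustrated with a minimal sketch: a simple classifier is trained on frozen representations, and its accuracy indicates how easily a linguistic property can be decoded from them. The data here is synthetic and purely illustrative, not from any model or study mentioned on this page.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical frozen hidden states, e.g. one vector per token taken from a
# pre-trained model (here: random vectors standing in for real activations).
hidden_states = rng.normal(size=(1000, 64))

# Hypothetical binary linguistic labels (e.g. "is this token a noun?"),
# synthetically correlated with one direction in the representation space.
labels = (hidden_states[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, random_state=0
)

# The probe itself: a simple linear classifier on the frozen representations.
probe = LogisticRegression().fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)

# High held-out accuracy suggests the property is linearly decodable from the
# representations; a more rigorous analysis would compare against baselines.
print(f"probe accuracy: {accuracy:.2f}")
```

In practice, probing studies compare such scores against control tasks and baselines, since a powerful probe can fit almost anything; the sketch only shows the basic train-and-score loop.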

Another research interest of mine is self-rationalizing models that generate free-text explanations along with their predictions. While textual explanations are flexible and easy to understand, they come with challenges: their relation to the actual prediction is speculative, and they can inherit potentially undesirable properties of human explanations, for example the ability to convincingly justify wrong predictions. I work on the evaluation and control of such explanations, and on the relation between explanation design and utility.
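A self-rationalizing model emits its prediction and a free-text explanation in a single generated string. The sketch below shows how such an output might be split into its two parts; the "label: … because …" template is a hypothetical convention for illustration, not a format from any of the works listed here.

```python
def parse_self_rationalisation(generated: str) -> tuple[str, str]:
    """Split a generated string into (predicted label, free-text explanation).

    Assumes the hypothetical output template "label: <label> because <explanation>".
    """
    label_part, _, explanation = generated.partition(" because ")
    label = label_part.removeprefix("label: ").strip()
    return label, explanation.strip()


prediction, explanation = parse_self_rationalisation(
    "label: entailment because the second sentence restates the first."
)
print(prediction)   # entailment
print(explanation)  # the second sentence restates the first.
```

The point of the format is that the explanation is generated jointly with the label, which is what makes its faithfulness to the prediction hard to verify.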

PhD thesis

CV in brief

  • Bachelor’s degree in Computer Science from Humboldt University of Berlin (2016).
  • Master’s degree in Language Technology from Uppsala University (2018).
  • PhD student at LiU (2019-2024).
  • Postdoc at LiU (since 2024).
  • Teaching: Language and Computers (729G49).
    Previous: Text Mining, Language Technology, Natural Language Processing, Neural Networks and Deep Learning, Foundations of AI and ML.

Publications

2025

Jenny Kunz (2025). Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT. Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), p. 323-330, Article 35 (Conference paper)
Julian Schlenker, Jenny Kunz, Tatiana Anikina, Günther Neumann, Simon Ostermann (2025). Only for the Unseen Languages, Say the Llamas: On the Efficacy of Language Adapters for Cross-lingual Transfer in English-centric LLMs. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), p. 849-871 (Conference paper)
Romina Oji, Jenny Kunz (2025). How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters

2024

Jenny Kunz, Marco Kuhlmann (2024). Properties and Challenges of LLM-Generated Explanations. Proceedings of the Third Workshop on Bridging Human-Computer Interaction and Natural Language Processing, p. 13-27 (Conference paper)
Jenny Kunz (2024). Understanding Large Language Models: Towards Rigorous and Targeted Interpretability Using Probing Classifiers and Self-Rationalisation
