
Farnaz Adib Yaghmaie

Assistant Professor

My research lies at the intersection of control and machine learning. I focus on redefining machine learning paradigms for control problems.

Research activities

I am interested in exploring learning paradigms (including Reinforcement Learning (RL)) within the context of Artificial Intelligence (AI), defining and learning solutions for control problems. My work encompasses the design of generalist agents for control, the application of large language models (LLMs) in control scenarios, and the integration of generative AI within control systems.

Foundation models and reinforcement learning: A symphony for general-purpose control, 2025-now

One of the main challenges in control is generalization to diverse and unseen tasks. Conventional control methods and modern Reinforcement Learning (RL) approaches have focused on task-specific solutions or a tabula rasa approach. These methods learn to solve one task from scratch without incorporating broad knowledge from other datasets, partly due to the incongruity of data from different systems. As a result, sequential decision-making algorithms struggle to generalize. Additionally, the successful deployment of sequential decision-making algorithms on physical systems requires new learning mechanisms that can make appropriate decisions on the fly, utilizing data collected from other tasks. In this project, we study how decision-making algorithms can 1) adapt to modalities not covered in the training data, 2) generalize to unseen tasks, and 3) adapt quickly to a specific task. The intersection of foundation models and RL holds tremendous promise for creating powerful control systems that can switch between, and adapt to, a diverse range of tasks.

Abbas Pasdar will start his Ph.D. studies on this project. Congrats Abbas!

Online Control in Presence of Adversary, 2022-now

In many practical applications, the noise in the system is not Gaussian. In safety-critical systems, the noise may even be designed by an adversary to deteriorate the performance of the learning system. The focus of this project is on developing online control algorithms that can successfully accomplish the task even in the presence of adversarial noise. More specifically, we investigated fully observable linear systems subject to adversarial process noise, where the noise can be stochastic, deterministic, designed to be worst-case, or intended to degrade performance.
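To make the setting concrete, here is a minimal simulation sketch (my own illustration, not the project's algorithm): a fully observable linear system x_{t+1} = A x_t + B u_t + w_t, where the disturbance w_t is bounded but adversarial, always pushing the state away from the origin, and the controller is an assumed stabilizing state feedback u_t = -K x_t.

```python
import numpy as np

# A double-integrator system with an assumed stabilizing gain K.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[0.5, 1.0]])   # closed-loop eigenvalues (1 ± i)/2, inside the unit circle

x = np.array([1.0, 0.0])
for t in range(200):
    u = -K @ x                    # state feedback
    w = 0.1 * np.sign(x)          # bounded adversarial noise, aligned with the state
    x = A @ x + B @ u + w         # fully observable linear dynamics

print(np.linalg.norm(x))          # remains bounded despite the worst-case noise
```

Because the closed loop is exponentially stable, the state stays bounded for any bounded disturbance; the online algorithms studied in this project go further and learn the controller itself while the adversary acts.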

Reinforcement Learning (RL) with Continuous State and Action Space, 2021-now

In today’s fast-paced tech world, there is growing interest in learning algorithms for dynamical systems that can be deployed solely based on sensory data without the need for explicit modeling. Reinforcement learning (RL) is key to handling unknown systems; however, it faces challenges with limited sensor data, often lacking complete state information. The main focus of this project is on designing RL algorithms for partially observable dynamical systems based on input-output data with theoretical guarantees.
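As a minimal sketch of the model-free flavor (my own simplification: a fully observed scalar system, whereas the project targets partially observable systems and input-output data), here is least-squares Q-learning for a linear-quadratic problem. The dynamics a, b are used only to generate data and are never seen by the learner:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 1.0          # true dynamics, unknown to the learner
k = 0.0                  # initial stabilizing feedback gain, u = -k x

def features(x, u):
    # quadratic basis for Q(x, u) = h1*x^2 + 2*h2*x*u + h3*u^2
    return np.array([x * x, 2 * x * u, u * u])

for _ in range(5):       # a few policy-iteration sweeps
    Phi, c = [], []
    x = 1.0
    for t in range(200):
        u = -k * x + 0.3 * rng.standard_normal()   # exploratory input
        x_next = a * x + b * u
        # Bellman equation: Q(x, u) - Q(x', -k x') = x^2 + u^2
        Phi.append(features(x, u) - features(x_next, -k * x_next))
        c.append(x * x + u * u)
        x = x_next
    h = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)[0]
    k = h[1] / h[2]      # greedy gain from the learned Q-function

print(round(k, 2))       # ≈ 0.54, the optimal LQR gain for this system
```

The learned gain matches the solution of the discrete-time Riccati equation without ever identifying a or b, which is the core idea behind model-free LQ control.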

I am looking for a qualified Ph.D. student for this project. If you are interested, contact me.

Multi-agent Systems, 2013-2019

During my Ph.D., my research focused on distributed control of linear heterogeneous multi-agent systems. More specifically, I derived necessary and sufficient conditions for a group of linear heterogeneous agents to achieve a desired collective behavior such as output regulation, bipartite output regulation, multi-party output regulation, and formation control. I also developed RL techniques for distributed control of multi-agent systems.

SLAM and Mobile Robot Navigation, 2009-2011

I am also interested in Simultaneous Localization And Mapping (SLAM) in dynamic environments using grid-based maps. During my master's studies, I worked on navigation of a nonholonomic mobile robot in a dynamic environment. The primary task of the robot was to perform SLAM in a dynamic environment, and for this purpose, I proposed algorithms to distinguish between dynamic and static obstacles and to redo path planning.
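The static-versus-dynamic distinction can be illustrated with a toy log-odds occupancy grid (my own sketch; the increments L_OCC and L_FREE are an assumed sensor model, not values from the thesis): a cell whose occupancy estimate is consistently high is treated as a static obstacle, while a cell whose estimate keeps flipping sign between scans is flagged as dynamic.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -1.0           # assumed log-odds sensor-model increments
grid = np.zeros((5, 5))              # log-odds occupancy, 0 = unknown
flips = np.zeros((5, 5), dtype=int)  # how often a cell's estimate changed sign

def update(cell, occupied):
    """Standard log-odds update, counting sign flips of the occupancy estimate."""
    r, c = cell
    before = grid[r, c] > 0
    grid[r, c] += L_OCC if occupied else L_FREE
    if (grid[r, c] > 0) != before:
        flips[r, c] += 1

# Cell (0, 0): a wall, seen occupied in every scan.
# Cell (1, 1): a person walking through, seen occupied in every other scan.
for t in range(10):
    update((0, 0), True)
    update((1, 1), t % 2 == 0)

static = grid > 2.0     # consistently occupied cells
dynamic = flips > 2     # cells whose estimate keeps flipping
print(static[0, 0], dynamic[1, 1])   # True True
```

A planner can then keep static cells in the map but replan around dynamic ones, which is the spirit of the re-planning step described above.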

Education

I received my bachelor's and master's degrees from K. N. Toosi University of Technology, Tehran, Iran, in 2009 and 2011, respectively. In 2013, I joined Nanyang Technological University (NTU) in Singapore as a Ph.D. student, and I received my Ph.D. degree with the "Best Thesis Award" in 2017.

Ph.D., Electronic and Electrical Engineering
Nanyang Technological University (NTU), Singapore

Master's Degree, Electrical Engineering, Control
K. N. Toosi University of Technology, Tehran, Iran

Bachelor's Degree, Electrical Engineering, Control
K. N. Toosi University of Technology, Tehran, Iran

For more information about me, please visit:


Google Scholar

LinkedIn

ResearchGate

GitHub

NEWS

  • 2025 Sep: I will give a Ph.D. level course on Advanced Robotics at ISY, Linköping University. If you are interested, please contact me.

  • 2025 Jun: I will chair the linear systems session at ECC, Thessaloniki, Greece!

  • 2025 May: I will give my Docent lecture on 14 May, 13:15-14:15 at Ada Lovelace, ISY.

  • 2025 May: There are many master projects and Ph.D. projects available within the competence centre SEDDIT. Check the homepage for SEDDIT.

  • 2024 Sep-Dec: I gave a Ph.D. level course on Reinforcement Learning. For the lectures, exercises and codes, please visit the homepage of the course.

  • 2024 Sep-Dec: I was a teacher in the Ph.D. level course on Reinforcement Learning, WASP.

  • 2024 Jan: Together with other researchers from the Automatic Control and Vehicular Systems divisions at ISY, Linköping University and Uppsala University, we started a competence centre called SEDDIT. Svante Gunnarson is the director of the centre.

  • 2023 Jan-Dec: We published two papers in Transactions on Machine Learning Research (TMLR); see here for the first paper and here for the second paper.

  • 2022 Sep-Dec: I was a teacher in the Ph.D. level course on Reinforcement Learning, WASP. The course received an excellent evaluation of 3.9/5 and was ranked the second-best Ph.D. course in WASP.

  • 2021 Apr: The second day of Reinforcement Learning workshop is on 6 April. Try some of the simplest RL algorithms in your browsers now!

  • 2021 Mar: You can now go through our simple handout about Reinforcement Learning entitled "A Crash Course on RL" on arXiv: short, easy to read, and comprehensive!

  • 2021 Mar: The first day of Reinforcement Learning workshop is on 13 March.

  • 2021 Jan: Check out the webpage for a Crash Course on RL!

  • 2021 Jan: I will have a workshop on RL for control at LiU, Linköping, Sweden in March. More details coming soon!

  • 2020 Nov: Check out my GitHub page for a crash course on RL. Find out how to implement RL for problems with continuous and discrete action spaces.

  • 2020 Sep: I received a CENIIT grant!

Publications

Farnaz Adib Yaghmaie, Hamidreza Modares, Bahare Kiumarsi,  On the performance of memory-augmented controllers, 2024 EUROPEAN CONTROL CONFERENCE, ECC 2024, pp. 1183-1189, IEEE (2024)  https://doi.org/10.23919/ECC64448.2024.10590810

Farnaz Adib Yaghmaie, Hamidreza Modares, Fredrik Gustafsson,  Reinforcement Learning for Partially Observable Linear Gaussian Systems Using Batch Dynamics of Noisy Observations, IEEE Transactions on Automatic Control 69:6397-6404 (2024)  https://doi.org/10.1109/TAC.2024.3385680

Amir Modares, Nasser Sadati, Babak Esmaeili, Farnaz Adib Yaghmaie, Hamidreza Modares,  Safe Reinforcement Learning via a Model-Free Safety Certifier, IEEE Transactions on Neural Networks and Learning Systems 35:3302-3311 (2024)  https://doi.org/10.1109/TNNLS.2023.3264815

Farnaz Adib Yaghmaie, Fredrik Gustafsson, Lennart Ljung,  Linear Quadratic Control Using Model-Free Reinforcement Learning, IEEE Transactions on Automatic Control 68:737-752 (2023)  https://doi.org/10.1109/TAC.2022.3145632

Farnaz Adib Yaghmaie, Hamidreza Modares,  Online Optimal Tracking of Linear Systems with Adversarial Disturbances, Transactions on Machine Learning Research (2023)
