Deep Learning

Deep neural networks are an essential part of the machine learning toolbox. This course provides an introduction to these models, surveys their applications, and offers practical experience with relevant software libraries.

Over the past few years, neural networks have enjoyed a major resurgence in machine learning, and today yield state-of-the-art results in various fields. This course provides an introduction to deep neural network models, and surveys some of the applications of these models in areas where they have been particularly successful. The course covers deep feed-forward networks, recurrent networks, and convolutional networks, as well as general topics such as input encoding and training techniques. The course also provides practical experience with some of the software libraries available for building and training deep neural networks.


This course is designed for professionals with an existing university education who want to acquire specialized knowledge and skills in the area of neural networks and deep learning. As background knowledge we recommend courses in calculus (derivatives), linear algebra (matrices, vectors), and probability theory. Participants should also have programming experience. The course will use Python, an industry standard in the area of machine learning.

Course information

The course is given in the form of lectures and computer labs. The lectures present basic concepts and methods in deep learning, as well as applications from two areas where deep learning has been particularly successful – natural language processing and computer vision. The labs will give participants hands-on experience with implementing, training, applying, and evaluating deep learning architectures using existing software libraries.

Participants who have passed all three obligatory computer labs will receive a course certificate.

The course will be given in English.


Dates

October 15 to November 28, schedule (PDF).


Location

The lectures, lab sessions, and seminars take place in the B-building and the E-building at Campus Valla, Linköping University.


Fee

SEK 7400 (excluding VAT)

Course coordinator

Marco Kuhlmann


Course literature

The main book for the course is:

  • Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

Additional reading consists of excerpts from the following books:

  • Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
  • Yoav Goldberg. Neural Network Methods for Natural Language Processing. Morgan & Claypool, 2017.
  • Simon O. Haykin. Neural Networks and Learning Machines. Third edition. Prentice Hall, 2008.


Application is closed.