Excerpt from course description

Deep Learning and Explainable AI

Introduction

Please note that this is a preliminary course description. The final version will be published in June 2026.

Deep learning models have achieved state-of-the-art performance in tasks such as image classification, representation learning, and data generation. This course explores a range of deep learning architectures, including convolutional neural networks (CNNs), graph neural networks, and autoencoders, as well as probabilistic graphical models, variational autoencoders (VAEs), and diffusion models. Students will gain hands-on experience in implementing and training deep learning models using TensorFlow.

Additionally, as interpretability is crucial in many real-world applications, the course introduces the principles and techniques of Explainable Artificial Intelligence (XAI), equipping students with the skills to make deep learning models more transparent and interpretable.

Course content

To address the learning outcomes of the course, the content covers the following topics:

  • Introduction to deep learning – history, positioning in AI, recent developments and challenges
  • Feedforward neural networks – backpropagation and gradient descent
  • Image processing with CNNs
  • Sequential data modelling with RNNs
  • Social network modelling with graph neural networks
  • Representation learning with autoencoders
  • Structural modelling with probabilistic graphical models
  • Generative modelling with VAEs and diffusion models
  • General principles of Explainable AI
  • A choice of explainable AI techniques
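To give a flavour of the second topic above, feedforward neural networks trained with backpropagation and gradient descent, here is a minimal NumPy sketch of the mechanics on the classic XOR problem. The course itself uses TensorFlow; the layer sizes, learning rate, and iteration count below are illustrative choices, not part of the course material.

```python
import numpy as np

# Minimal feedforward network trained with backpropagation and plain
# gradient descent on XOR. All hyperparameters here are illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units (an arbitrary illustrative size)
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    # Mean squared error of the current network on the whole dataset
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))

loss_before = mse()
lr = 0.5
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: error signals (deltas) for MSE with sigmoid units
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer

    # Gradient descent parameter update
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

loss_after = mse()
preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(f"loss: {loss_before:.3f} -> {loss_after:.3f}")
print(preds.ravel())
```

The same model is a few lines in TensorFlow's Keras API, but spelling out the forward and backward passes by hand is the usual way this topic is first taught.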

Disclaimer

This is an excerpt from the complete course description. If you are an active student at BI, you can find complete course descriptions with information on e.g. learning goals, learning process, curriculum and exams at portal.bi.no. We reserve the right to make changes to this description.