Beginning: Wednesday, 15.04.2026, 13.00 - 15.00 c.t.
Room: Hörsaal 1100, TUM
0501.01.100 (lecture hall without experimental stage)
Vorlesung/Lecture: Wednesday, 13.00 - 15.00 c.t.
Room: Hörsaal 1100, TUM
2 SWS
Übung/Exercises: Wednesday, 15.00 - 16.30 h, directly after the Lecture
Room: Hörsaal 1100, TUM
2 SWS
Previous Knowledge Expected
Basic knowledge of machine learning and programming is expected, as obtained through introductory programming and machine learning courses.
Content
Foundation models are machine learning models that are trained on large amounts of data in an unsupervised manner and are fine-tuned to perform a given task or a wide range of tasks. Examples of foundation models are large language models (e.g., generative pretrained transformers (GPTs)), multi-modal models such as Contrastive Language–Image Pretraining (CLIP), and image-generation models such as DALL-E and Stable Diffusion.
In this course, students will learn the technical aspects of foundation models: what foundation models are, how they work, and how they are trained and fine-tuned. We discuss large language models, multi-modal models (CLIP), and image-generation models (Stable Diffusion). These models are trained on large datasets. We will discuss how to train such models on high-performance computers and the engineering challenges associated with training and handling large amounts of data. For downstream tasks, these models are fine-tuned and aligned. We will discuss fine-tuning and alignment strategies. Data is critical for the performance of foundation models. We will discuss data curation techniques and concepts such as scaling laws for working with foundation models.
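The contrastive pre-training objective behind CLIP can be sketched in a few lines. The following is a minimal illustration assuming numpy; the function name `clip_contrastive_loss` and the fixed temperature are our own illustrative choices, not part of the course material:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matching pairs sit on the diagonal of the similarity matrix; the loss
    pushes each image toward its own caption and vice versa.
    """
    # L2-normalize so the dot product becomes a cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (batch, batch) similarities
    labels = np.arange(len(logits))           # image i matches caption i

    def xent(l):
        # Cross-entropy of each row's softmax against the diagonal label.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return (xent(logits) + xent(logits.T)) / 2
```

With perfectly aligned embeddings the loss is near zero; shuffling the caption order makes it large, which is exactly the signal used during pre-training.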
- Lecturer: Ursula Fantauzzo
- Lecturer: Björn Ommer
Vorlesung/Lecture:
Thu 12 – 14 c.t., Geschwister-Scholl-Platz 1, Hauptgebäude, M 105
16.04.-16.07.2026
2 SWS
Übung/Exercises:
Fri 10 – 12 c.t., Geschwister-Scholl-Platz 1, Hauptgebäude, D 209
17.04.-17.07.2026
2 SWS
This lecture focuses on Generative AI: deep learning approaches in computer vision that are generative, so they can not only analyze existing scenes but, in particular, synthesize novel images and videos.
Modern deep learning has fundamentally changed artificial intelligence. Computer vision was at the forefront of many of these developments and has benefited tremendously from this progress over the last decade. Novel applications as well as significant improvements to old problems continue to appear at a staggering rate. Especially the areas of image and video synthesis and understanding have seen previously unthinkable improvements and provided astounding visual results with wide-ranging implications (trustworthiness of AI, deep fakes).
We will discuss how a computer can learn to understand images and videos based on deep neural networks. The lecture will briefly review the necessary foundations of deep learning and computer vision and then cover the latest works from the quickly developing field of Generative AI. The practical exercises that accompany this course will provide hands-on experience and allow attendees to practice while building and experimenting with powerful image generation architectures.
Topics include but are not limited to:
* Image & video synthesis
* Visual super-resolution and image completion
* Artistic style transfer
* Interpretability and trustworthiness of deep models
* Self-supervised learning
* Modern deep learning approaches, such as transformers and self-attention, invertible neural networks, diffusion models, flow matching, etc.
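The diffusion models listed above rest on a simple forward process that gradually noises an image. A minimal sketch of the closed-form DDPM-style forward step, assuming numpy (the schedule and the function name are illustrative, not course code):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process.

    Uses the closed form
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta).
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)          # common linear schedule
x0 = np.ones(4)                                # a toy "image"
x_late = forward_diffuse(x0, 999, betas, rng)  # nearly pure noise
```

The generative model is then trained to invert this process step by step, which is what the lecture covers in depth.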
- Lecturer: Ursula Fantauzzo
- Lecturer: Björn Ommer
- Lecturer: Eyke Hüllermeier
- Lecturer: Yusuf Sale
This master-level lecture surveys the core implementation techniques behind modern database management systems, with a strong systems-oriented focus. Topics include on-disk and in-memory storage layouts, buffer pool management, indexing structures (B-trees, extendible and linear hashing), query execution operators, and cost-based optimization. The course also covers transaction processing, concurrency control protocols, write-ahead logging, crash recovery, and the fundamentals of distributed query processing and replication. Through readings, design discussions, and programming projects, students analyze real system architectures and learn how theoretical ideas translate into high-performance, fault-tolerant database engines.
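As a taste of the buffer-pool topic, a least-recently-used replacement policy can be sketched in a few lines. This is a toy illustration (class and parameter names are our own), not a description of any specific engine:

```python
from collections import OrderedDict

class BufferPool:
    """Minimal LRU buffer pool: caches a fixed number of pages and
    evicts the least-recently-used page when a new one is fetched."""

    def __init__(self, capacity, fetch_page):
        self.capacity = capacity
        self.fetch_page = fetch_page   # reads a page from "disk"
        self.frames = OrderedDict()    # page_id -> page data, LRU first
        self.hits = self.misses = 0

    def get(self, page_id):
        if page_id in self.frames:
            self.frames.move_to_end(page_id)     # mark most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.frames) >= self.capacity:
                self.frames.popitem(last=False)  # evict the LRU page
            self.frames[page_id] = self.fetch_page(page_id)
        return self.frames[page_id]
```

Real buffer managers add pinning, dirty-page tracking, and write-back, which is exactly where the systems focus of the course comes in.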
- Lecturer: Marcus Paradies
- Lecturer: Constantin Pestka
- Lecture: Thursday 7-10, B 001, Oettingenstr. 67
- Lab: Thursday 12-14, A125, Geschw.-Scholl Platz 1
- Enrollment key: ParallelPioneersLMUSS26

- Lecturer: Sergej-Alexander Breiter
- Lecturer: Karl Fürlinger
- Lecture: Wednesday 8-10, U127, Oettingenstr. 67
- Lab: Wednesday 12-14, U127, Oettingenstr. 67
- Enrollment key: AtomicFenceSS26
- Memory consistency models
- C++ atomic types and operations
- Synchronization algorithms (barriers, locks)
- Concurrent objects
- Lock-freedom and wait-freedom
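As a small illustration of the synchronization topic, the following sketch (in Python rather than the C++ used in the course, purely for brevity) demonstrates the barrier guarantee: no thread enters phase 2 before all threads have finished phase 1.

```python
import threading

def run_phases(n_threads=4):
    """All threads finish phase 1 before any starts phase 2,
    enforced by a barrier."""
    barrier = threading.Barrier(n_threads)
    log = []
    lock = threading.Lock()

    def worker(tid):
        with lock:
            log.append(("phase1", tid))
        barrier.wait()           # nobody passes until all have arrived
        with lock:
            log.append(("phase2", tid))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

The ordering of entries within each phase is nondeterministic, but the barrier guarantees every "phase1" entry precedes every "phase2" entry.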

- Lecturer: Karl Fürlinger
This lecture explains the basics of quantum computing, including:
- Mathematical foundations
- Quantum bits (qubits) and quantum circuits
- Superposition, entanglement and interference
- Quantum oracle algorithms and variational algorithms
- Complexity of quantum algorithms and the need for new complexity classes
- Shor's algorithm and the implications for modern cryptography
- Quantum communication and cryptography
- Hardware limitations and error correction
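The superposition and interference items above can be illustrated with plain state vectors; a minimal numpy sketch (an illustrative example, not course code):

```python
import numpy as np

# A single qubit is a 2-component complex amplitude vector.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

plus = H @ ket0
probabilities = np.abs(plus) ** 2  # Born rule: 50/50 measurement outcomes

# Applying H again returns |0>: the |1> amplitudes cancel (interference).
back = H @ plus
```

This state-vector picture is the mathematical foundation the lecture builds on before moving to circuits and algorithms.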
- Lecturer: Tobias Guggemos
- Lecturer: Florian Krötz
- Lecturer: Korbinian Staudacher
- Lecturer: Xiao-Ting To
This lecture brings together two trending topics of technology-enhanced learning that have much overlap in terms of providing feedback and self-reflection.
On the one hand, concepts, methods, specific approaches, and standards in E-Assessment are discussed in detail (e.g., formative vs. summative assessment, item generation, and assessment tools).
On the other hand, objectives, approaches, architectures, and standards for Learning Analytics are discussed with a focus on the application in the TEL domain, e.g., social network analysis, recommender systems, clustering, information visualizations.
In combination, E-Assessment and Learning Analytics can provide solid opportunities for optimizing learning and teaching.
The examination will be a combination of a group project and an oral examination.
- Lecturer: Sven Strickroth
- Lecturer: Volker Tresp
- Lecturer: Jingpei Wu
- Lecturer: Gengyuan Zhang
This course introduces the proof assistant Lean 4, its type-theoretic foundations, and its applications to computer science and mathematics.
Proof assistants are pieces of software that can be used to check the correctness of a specification of a program or the proof of a mathematical theorem. In the practical work, we learn to use Lean. We will see how to use the system to prove mathematical theorems in a precise, formal way, and how to verify small functional programs. In the course, we focus on Lean's dependent type theory and on the Curry–Howard correspondence between proofs and functional programs (λ-terms). These concepts are the basis of Lean but also of other popular systems, including Agda, Matita, and Rocq.
There are no formal prerequisites, but familiarity with functional programming (e.g., Haskell) and basic algebra is an asset. If you are new to functional programming, we recommend that you read the first chapters of Learn You a Haskell for Great Good!, stopping at the section "Only folds and horses."
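As a flavor of the Curry–Howard view described above, a proof of a simple propositional theorem in Lean 4 is literally a function on pairs (an illustrative example, not taken from the course notes):

```lean
-- Under Curry–Howard, a proof of A ∧ B → B ∧ A is just a function
-- that swaps the two components of a pair.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩

-- The same statement proved with tactics instead of a raw proof term:
example (A B : Prop) : A ∧ B → B ∧ A := by
  intro h
  exact ⟨h.2, h.1⟩
```

Both styles produce the same underlying λ-term, which is the correspondence between proofs and functional programs that the course develops.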
- Lecturer: Jasmin Blanchette
- Lecturer: Alexandra Graß
- Lecturer: Yiming Xu
This course covers advanced techniques for automatic software verification, especially those in the field of software model checking. Knowledge from the Bachelor course Formal Verification and Specification (FSV) is helpful but not mandatory. This course can be used for the specialization "Programming, Software Verification, and Logic" in the MSc computer science (cf. German site on specializations).
Topics
The course covers the following topics:
- Mathematical foundation for software verification
- Configurable program analysis
- Strongest postcondition
- Predicate abstraction with a fixed precision
- Craig interpolation and abstraction refinement (CEGAR)
- Predicate abstraction with precision adjustment
- Bounded model checking and k-induction
- Observer automata
- Verification witnesses
- Test generation and symbolic execution
- LTL and liveness analysis
- Model checking for Computation Tree Logic (CTL)
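The bounded model checking topic above can be illustrated with a toy explicit-state reachability check; a hedged sketch (real bounded model checkers work symbolically via SAT/SMT encodings, which this deliberately omits):

```python
def bounded_check(init, step, is_error, k):
    """Bounded model checking by explicit-state exploration: explore all
    states reachable from `init` in at most k transitions and return a
    counterexample path to an error state, or None if none exists
    within the bound."""
    frontier = [(s, (s,)) for s in init]
    visited = set(init)
    for depth in range(k + 1):
        for state, path in frontier:
            if is_error(state):
                return list(path)        # counterexample within k steps
        if depth == k:
            break
        next_frontier = []
        for state, path in frontier:
            for succ in step(state):
                if succ not in visited:
                    visited.add(succ)
                    next_frontier.append((succ, path + (succ,)))
        frontier = next_frontier
    return None
```

For example, for a counter that increments from 0 and errs at 3, a bound of 2 proves nothing, while a bound of 3 yields the counterexample path 0, 1, 2, 3.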
Reference Materials
- Combining Model Checking and Data-Flow Analysis (Chapter 16 in the Handbook of Model Checking)
- A Unifying View on SMT-Based Software Verification
Organization
The course consists of weekly lectures and tutorials. Important announcements are sent via Moodle messages.
Time Slots and Rooms
- Lecture: Wednesday, 10:00 - 12:00 (s.t., full 2 hours), Geschw.-Scholl-Pl. 1 (D) D Z005, by Thomas Lemberger
- Tutorial: Thursday, 14:15 - 15:45, Geschw.-Scholl-Pl. 1 (D) D Z005, by Marek Jankola
The first lecture is on 2026-04-15. The first tutorial session will be on 2026-04-16.
- Lecturer: Dirk Beyer
- Lecturer: Marek Jankola
- Lecturer: Thomas Lemberger
Overview
Voting is an important part of democratic societies and potentially has a broad impact. Yet, with or without the use of modern technology, voting is full of algorithmic and security challenges, and the failure to address these challenges in a controlled manner may produce fundamental flaws in the voting system and potentially undermine critical societal aspects.
In this lecture, we discuss voting systems from various perspectives, notably social choice theory, security, and cryptography. Which requirements should a voting system fulfill? When is a voting system secure, even independently of the involved software? Which mechanisms should be investigated for that matter? Which methods are suitable to address these challenges?
We will investigate cryptographic voting systems, algorithmic tallying procedures, statistical methods to test the reliability of an election result, and distinguish the different layers of a voting system.
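As a small taste of algorithmic tallying, the following sketch contrasts two classic social-choice rules on a preference profile where they disagree (an illustrative example, not course material):

```python
from collections import Counter

def plurality(ballots):
    """Winner is the candidate with the most first-place votes."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Each ballot ranks all n candidates; a candidate at position p
    earns (n - 1 - p) points, and the highest total wins."""
    scores = Counter()
    n = len(ballots[0])
    for ballot in ballots:
        for pos, cand in enumerate(ballot):
            scores[cand] += n - 1 - pos
    return scores.most_common(1)[0][0]

# A profile where the two rules disagree:
ballots = (
    [["A", "B", "C"]] * 4    # 4 voters: A > B > C
    + [["B", "C", "A"]] * 3  # 3 voters: B > C > A
    + [["C", "B", "A"]] * 2  # 2 voters: C > B > A
)
```

Plurality elects A (4 first-place votes), while Borda elects B (12 points to A's 8), a disagreement of the kind social choice theory studies systematically.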
Preliminary Knowledge
Foundations of cryptography and security, as taught, for instance, in the lecture "IT-Sicherheit", are recommended. Moreover, we make an effort to coordinate the cryptography parts of the lecture with the practical course "Cryptography", which you can optionally take in parallel.
Events
| Type | Time | Place | Lecturer | Begin | End |
|---|---|---|---|---|---|
| Lecture | Tue, 10–12 c.t. | | Dr. Michael Kirsten | 14.04.2026 | 14.07.2026 |
| Exercise | Wed, 12–14 c.t. | | Dr. Michael Kirsten | 15.04.2026 | 15.07.2026 |
Course Website
https://www.tcs.ifi.lmu.de/teaching/courses-ss-2026/theory-and-security-of-voting-systems

- Lecturer: Michael Kirsten
The course targets Master students in computer science or related programs. We expect participants to already have experience in programming and basic knowledge about software engineering.
The focus of the course is on testing functional behavior. This course introduces basic terms used in the area of software testing and looks into the process of test-case and test-suite generation. To this end, the course discusses the test oracle problem, i.e., whether or when a test is successful and the result of a test is expected. Also, the course covers different manual and automatic approaches for input generation, thereby distinguishing between black-, grey-, and white-box techniques. Furthermore, the course compares various metrics to judge the adequacy of test suites. In addition, the course studies the issue of regression testing.
At the end of the course, you should be able to
- explain basic testing terms
- describe the test oracle problem
- explain approaches that make correctness requirements, i.e., the expected test outcomes, executable
- formulate automatically checkable correctness requirements for requirements given in natural language
- name and explain the studied input generation techniques and apply them to example programs
- name, define, explain and distinguish the studied adequacy criteria for test suites, apply adequacy criteria to given test suites, and compare test suites based on adequacy criteria
- describe techniques for regression testing and apply them to examples
- discuss advantages, disadvantages, and limitations of the studied techniques
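To illustrate the adequacy criteria mentioned above, a toy statement-coverage tracer can compare two test suites; a sketch assuming CPython's `sys.settrace` (the helper names and the example function are our own):

```python
import sys

def executed_lines(func, *args):
    """Record which lines inside `func` run for one input: a toy
    statement-coverage measurement based on sys.settrace."""
    lines = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            # Store line numbers relative to the function definition.
            lines.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

# Suite 1 never takes the `x > 0` branch; suite 2 covers both branches,
# so it is more adequate under the statement-coverage criterion.
suite1 = executed_lines(classify, -1)
suite2 = executed_lines(classify, -1) | executed_lines(classify, 5)
```

Comparing the two sets makes the adequacy relation concrete: suite 2 strictly subsumes suite 1 with respect to statement coverage.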
- Lecturer: Marie-Christine Jakobs
- Lecturer: Márk Somorjai
Advanced Analytics and Machine Learning examines the algorithmic and systems foundations required to realize modern machine learning systems. As many workloads increasingly rely on large datasets and foundation models, performance and reliability are determined not only by learning algorithms, but also by the underlying infrastructure for data management, distributed computation, and operational deployment. The course therefore treats machine learning as an end-to-end pipeline, spanning data ingestion and storage, large-scale processing, model training and evaluation, and inference under resource and latency constraints.
The course develops a principled understanding of scalability through core concepts in high-performance computing and distributed systems, including data locality, communication costs, synchronization, fault tolerance, and scheduling. These principles are connected to practical implementations in widely used platforms for batch and streaming analytics (e.g., Spark, Dask, Ray, Flink, Kafka) and deep learning toolchains (PyTorch and the Transformers ecosystem). Particular attention is given to infrastructure and systems issues. The curriculum is complemented by responsible AI considerations and an overview of quantum machine learning as an emerging technology.
Topics:
- Foundations of scalable analytics and ML systems (parallelism models, distributed abstractions, communication and synchronization costs, fault tolerance, scheduling)
- Data management and large-scale processing (data lake architectures, SQL on semi-structured data with Hive, Spark SQL, Presto; batch processing with Spark, Dask, Ray)
- Streaming systems and continuous analytics (stream processing with Kafka and Spark Streaming, state, windows, operational semantics)
- Training systems for machine learning (classical ML workflows at scale, deep learning with PyTorch, distributed training principles, checkpointing and performance analysis)
- Inference systems and operational ML (deployment patterns, throughput and latency modeling, efficiency techniques such as batching and quantization basics, monitoring concepts)
- Emerging technologies (e.g., quantum machine learning).
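The quantization basics mentioned above can be sketched with symmetric int8 quantization, a common inference-efficiency technique; an illustrative example assuming numpy (the function names and the specific scheme are our own choices):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric int8 quantization: map floats in [-max|x|, max|x|]
    onto the integer range [-127, 127] with a single scale factor."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from int8 codes."""
    return q.astype(np.float32) * scale

x = np.array([0.1, -0.5, 0.25, 1.0], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
error = np.abs(x - x_hat).max()  # rounding error, at most ~scale / 2
```

Storing int8 codes instead of float32 cuts memory traffic by 4x, which is the kind of throughput/latency trade-off the inference-systems part of the course models.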
Dates:
- Saturday, March 7, 2026
- Saturday, March 14, 2026
- Saturday, March 21, 2026
- Saturday, March 28, 2026
- Saturday, April 18, 2026.
- Lecturer: Andre Luckow
Computer games and games-related formats are an essential branch of the media industry, with sales exceeding those of the music or movie industry. In many games, building a dynamic environment with autonomously acting entities is necessary. This comprises all kinds of mobile objects, non-player characters, computer opponents, and the dynamics of the environment itself. To model these elements, techniques from the area of Artificial Intelligence allow for modeling adaptive environments with interesting dynamics. From the point of view of AI research, games currently provide multiple environments that allow the development of breakthrough technology in Artificial Intelligence and Deep Learning. Projects like OpenAI Gym, AlphaGo, OpenAI Five, and AlphaStar earned much attention in the AI research community and the broad public. The reason for the importance of games for developing autonomous systems is that games provide environments that usually allow fast throughputs and provide clearly defined tasks for a learning agent to accomplish. The lecture provides an overview of techniques for building environment engines and making these suitable for large-scale, high-throughput games and simulations.
Furthermore, we will discuss the foundations of modeling agent behavior and how to evaluate it in deterministic and non-deterministic settings. Based on this formalism, we will discuss how to analyze and predict agent or player behavior. Finally, we will introduce various techniques for optimizing agent behavior, such as sequential planning and reinforcement learning.
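The reinforcement-learning techniques mentioned above can be illustrated with tabular Q-learning on a toy chain environment; a hedged sketch assuming numpy (all names and hyperparameters are illustrative, not course material):

```python
import numpy as np

def q_learning_chain(n_states=5, episodes=500, alpha=0.5,
                     gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain MDP: states 0..n-1, actions
    left (0) and right (1), reward 1 only on reaching the rightmost
    state, which ends the episode."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))  # Q[state, action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q
```

After training, the learned values for the state next to the goal clearly favor moving right, showing how reward propagates backward through the chain.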
- Lecturer: Zongyue Li
- Lecturer: Philipp Pfefferkorn
- Lecturer: Matthias Schubert