Enrollment key: ml25

Moodle self-registration key: ITP25

This course introduces the proof assistant Lean 4, its type-theoretic foundations, and its applications to computer science and mathematics.

Proof assistants are pieces of software that can be used to check the correctness of a specification of a program or the proof of a mathematical theorem. In the practical work, we learn to use Lean. We will see how to use the system to prove mathematical theorems in a precise, formal way, and how to verify small functional programs. In the course, we focus on Lean’s dependent type theory and on the Curry–Howard correspondence between proofs and functional programs (λ-terms). These concepts are the basis of Lean but also of other popular systems, including Agda, Matita, and Rocq.
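As a small taste of the Curry–Howard view (an illustrative sketch, not official course material): in Lean 4, a proof of an implication is literally a function.

```lean
-- Under Curry–Howard, a proof of `a ∧ b → b ∧ a` is a function that
-- takes a proof of `a ∧ b` and returns a proof of `b ∧ a`.
theorem and_swap (a b : Prop) : a ∧ b → b ∧ a :=
  fun h => ⟨h.right, h.left⟩
```

The λ-term `fun h => ⟨h.right, h.left⟩` is at the same time a functional program and a formal proof, which is exactly the correspondence the course builds on.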

There are no formal prerequisites, but familiarity with functional programming (e.g., Haskell) and basic algebra is an asset. If you are new to functional programming, we recommend that you read the first chapters of Learn You a Haskell for Great Good!, stopping at the section “Only folds and horses”.

This course will give a general introduction to methods for modeling and learning preferences, with a specific focus on methods for ranking. The topic of preferences has recently attracted considerable attention in artificial intelligence (AI) research, notably in fields such as autonomous agents, constraint satisfaction, planning, and information retrieval. Preferences provide a means for specifying desires in a declarative way, which is a point of critical importance for AI. Drawing on past research on knowledge representation and reasoning, AI offers qualitative and symbolic methods for treating preferences that can complement existing approaches from other fields, such as decision theory. However, the acquisition of preference information is not always an easy task. Therefore, not only are modeling languages and suitable representation formalisms needed, but also methods for the automatic learning, discovery, modeling, and adaptation of preferences.
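As an illustrative sketch of one classic ranking model, the Plackett–Luce model (my own example; the model also lends this course its enrollment key): each item has a positive worth, and a ranking is generated by repeatedly drawing the next-best item with probability proportional to its worth among the remaining items.

```python
# Plackett-Luce: the probability of a ranking is the product, over
# positions, of the chance that the item placed there beats all items
# still remaining.

def plackett_luce_prob(ranking, worth):
    """Probability of observing `ranking` (best item first)."""
    prob = 1.0
    remaining = list(ranking)
    while remaining:
        total = sum(worth[item] for item in remaining)
        prob *= worth[remaining[0]] / total  # chance the top item wins
        remaining.pop(0)
    return prob

worth = {"a": 4.0, "b": 2.0, "c": 2.0}
print(plackett_luce_prob(["a", "b", "c"], worth))  # 0.25
```

Learning such a model from observed rankings amounts to estimating the worths, which is one way the "learning preferences" theme becomes concrete.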


Component   Schedule               Location   Instructor
Lecture     Tuesday, 16–18 c.t.    M014, HGB  Prof. Dr. Eyke Hüllermeier
Exercise A  Thursday, 14–16 c.t.   A014, HGB  Yusuf Sale
Exercise B  Friday, 10–12 c.t.     M110, HGB  Yusuf Sale

The enrollment key is:  PlackettLuce

In case of questions, please contact yusuf.sale@ifi.lmu.de.


see https://www.bio.ifi.lmu.de/studium/ss2025/vlg_sysb/index.html

Please also check the following link:
https://campus.tum.de/tumonline/ee/ui/ca2/app/desktop/#/slc.tm.cp/student/courses/950802886?$scrollTo=toc_overview

Vorlesung/Lecture:

Mon, 11–13 c.t., Arcisstr. 21, Room 1180
28.04.–21.07.2025
2 SWS

Übung/Exercises:
Mon, 13–15 c.t., Arcisstr. 21, Theresianum 0606
28.04.–21.07.2025
2 SWS

Self-Enrollment (only for participants in the exercises): 21.02.–15.04.2025
Key: FoundationModels25
The waiting list is closed!

Previous Knowledge Expected

Basic knowledge of machine learning and programming is expected, as typically obtained through introductory programming and machine learning courses.

Content       
Foundation models are machine learning models that are trained on large amounts of data in an unsupervised manner and are then fine-tuned to perform a given task or a wide range of tasks. Examples of foundation models are large language models (e.g., generative pretrained transformers (GPTs)), multi-modal models such as Contrastive Language–Image Pretraining (CLIP), and image-generating models such as DALL-E and Stable Diffusion.

In this course, students will learn the technical aspects of foundation models: what foundation models are, how they work, and how they are trained and fine-tuned. We discuss large language models, multi-modal models (CLIP), and image generation models (Stable Diffusion). These models are trained on large datasets, and we will discuss how to train them on high-performance computers and the engineering challenges associated with training and handling large amounts of data. For downstream tasks, these models are fine-tuned and aligned; we will discuss fine-tuning and alignment strategies. Finally, since data is critical for the performance of foundation models, we will discuss data curation techniques and concepts such as scaling laws for working with foundation models.


Vorlesung/Lecture:
Thu, 12–14 c.t., Hauptgebäude, M 105
24.04.–24.07.2025
2 SWS

Übung/Exercises:
Fri, 10–12 c.t., Hauptgebäude, E 004
25.04.–25.07.2025
2 SWS

Self-Enrollment (only for participants in the exercises): 21.02.–15.04.2025
Key: ImageVideo25
The waiting list is closed!


Machine Learning and Computer Vision – teaching machines to solve tasks and, in particular, to see – has reached a point where it powers numerous practical applications and tools: the resulting algorithms make it possible to search enormous volumes of data and to retrieve relevant text, images, videos, or music from large collections or the internet. Moreover, computers are now making progress at “understanding” text and images to the point that they can answer queries in natural language, synthesize a musical tune, or create images and videos. Recent breakthroughs in image synthesis such as Stable Diffusion already generate images from a mere user description of the content. Consequently, AI is currently transforming the abilities of computers: they can now support users in ever more complex tasks and do so with much greater ease and flexibility. But is the machinery underneath actually understanding the content? What is missing, and what are the future limitations?

This course will review the Machine Learning and Computer Vision fundamentals underlying the present revolution in this field. We will discuss how some important basic Machine Learning algorithms practically work and how they can be employed in real applications. We will study what is necessary for a computer to “see”. Then we will review current state-of-the-art approaches for text and image retrieval, and demonstrate algorithms for generating text, music, images, and video.

There will be a written exam (Klausur) at the end of the semester.


Topic: Methods in Software Engineering will introduce and discuss topics relevant to the design and implementation of software systems, as well as quality assurance and maintenance.

Language: The lecture will be held in English.

Moodle Password: MSE Rocks!

Target audience:

  • Master's students. This lecture is part of the "Schwerpunkt Programming, Software-Verification, and Logic".
  • Bachelor's students with solid knowledge of programming and SWT. The lecture can be recognized as a WP module.

Times

  • Lecture: Wed 10–12, LEHRTURM-VU107
  • Tutorial: Tue 12–14, LEHRTURM-VU107

Examination

  • Written exam (90 min, 6 ECTS). The main exam will take place at the end of July or in early August; the re-take exam in early October.
  • Important: At the master's level, the exam will contain additional and/or more challenging questions based on a practical project that runs throughout the semester.
More organizational information will be provided at a later date.

Recommended Knowledge: Object-oriented programming, software engineering basics

The rapid digitalization of science, industry, and society has led to an unprecedented increase in data generation, necessitating scalable and efficient data processing, storage, and analytics solutions, as well as enabling AI and machine learning. The exponential growth of machine-generated data—such as sensor readings, server logs, and transactional records—poses significant computational challenges. The proliferation of the Internet of Things (IoT) continues to accelerate this data explosion, reinforcing the need for advanced methodologies in high-performance and distributed computing.

This course explores the foundations and practical implementations of large-scale data processing, distributed machine learning, and scalable AI solutions. We will investigate computational frameworks designed for high-throughput data analytics, deep learning, and generative AI, with a focus on modern transformer-based architectures and foundation models such as LLaMA and DeepSeek. The curriculum also covers emerging trends in quantum machine learning and AI ethics, ensuring a comprehensive understanding of both technological advancements and their broader implications.

Students will engage with state-of-the-art distributed computing frameworks such as Spark, Dask, and Ray, as well as PyTorch/Transformers, to develop scalable AI solutions with applications spanning computer vision, natural language processing (NLP), and generative AI.

This class will cover the following topics:
  • Data applications in industry and sciences
  • Data-intensive methods in high performance computing
  • Large-scale data processing using Spark, Dask, Flink, and Ray
  • SQL for unstructured data: Hive, Spark SQL, Presto
  • Stream processing: Kafka, Spark Streaming, Flink
  • Data science and machine learning: unsupervised and supervised methods, tools (NumPy, pandas, scikit-learn)
  • Deep learning for computer vision: convolutional neural networks (PyTorch)
  • Natural language processing: word embeddings, large language models (RNNs, LSTMs, Transformers), incl. recent developments (reasoning models like DeepSeek)
  • Quantum machine learning
  • AI ethics and responsible AI
The course will be offered as a block lecture. The lecture will be held in English.
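To illustrate the pattern that frameworks such as Spark and Dask scale out across clusters, here is a pure-Python sketch of map/reduce (my own illustration, not an actual Spark program): map records to key-value pairs, group by key, then reduce each group.

```python
# Word count in the map/reduce style: the mapper emits (word, 1) pairs,
# the shuffle step groups values by key, and the reducer sums them.
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Apply mapper to every record, then reducer to each key's values."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

lines = ["to be or not to be"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=sum,
)
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a distributed framework, the map and reduce phases run in parallel across machines; the program structure stays essentially the same.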

Course Page: https://www.nm.ifi.lmu.de/teaching/Vorlesungen/2025ss/data-analytics/

Lecture with exam. 

The field of natural computing considers and employs algorithms and methods that are adopted from or inspired by natural phenomena. Accordingly, this course will cover topics such as cellular automata, quantum computing, artificial chemistry systems, evolutionary algorithms and other optimization methods, and swarm methods including ant algorithms, whose concepts are derived from physics, chemistry, or biology.
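As a toy instance of one of these topics, cellular automata (my own sketch, not course material): an elementary cellular automaton updates each cell from its 3-cell neighborhood via a rule table encoded in one byte, here the well-known Rule 110.

```python
# One update step of an elementary cellular automaton on a cyclic row.

def step(cells, rule=110):
    """Apply an elementary CA rule to a cyclic row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (mid << 1) | right  # neighborhood as 3-bit index
        out.append((rule >> idx) & 1)           # look up the rule's bit
    return out

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(4):
    print("".join(map(str, row)))
    row = step(row)
```

Despite the minimal local rule, iterating such systems produces the complex global patterns that make them a staple of natural computing.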

Registration key: natural25

This course covers advanced techniques for automatic software verification, especially those in the field of software model checking. It continues the Bachelor course Formal Verification and Specification (FSV). Knowledge from FSV is helpful but not mandatory. This course can be used for the specialization "Programming, Software Verification, and Logic" in the MSc computer science (cf. German site on specializations).

Topics

The course will cover the following topics:

  1. Mathematical foundation for software verification
  2. Configurable program analysis
  3. Strongest postcondition
  4. Predicate abstraction with a fixed precision
  5. Craig interpolation and abstraction refinement (CEGAR)
  6. Predicate abstraction with precision adjustment
  7. Bounded model checking and k-induction
  8. Observer automata
  9. Verification witnesses
  10. Test generation and symbolic execution
  11. Cooperative verification
  12. Project: implementing a software verifier (2 weeks)
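As a toy illustration of topic 7, bounded model checking (my own hypothetical example, not the course's tooling): unroll a transition system up to depth k and report a path to an error state if one exists within the bound.

```python
# Explore all executions of a transition system up to k transitions.

def bmc(init, transitions, is_error, k):
    """Return a counterexample path with at most k transitions, or None."""
    frontier = [[s] for s in init]
    for _ in range(k + 1):
        next_frontier = []
        for path in frontier:
            state = path[-1]
            if is_error(state):
                return path            # counterexample found
            for succ in transitions.get(state, []):
                next_frontier.append(path + [succ])
        frontier = next_frontier
    return None                        # no error reachable within the bound

# A chain 0 -> 1 -> 2 -> 3 where state 3 is the error state.
trans = {0: [1], 1: [2], 2: [3], 3: []}
print(bmc([0], trans, lambda s: s == 3, 2))  # None (bound too small)
print(bmc([0], trans, lambda s: s == 3, 3))  # [0, 1, 2, 3]
```

Real bounded model checkers encode this unrolling as a SAT/SMT formula instead of explicit enumeration, and k-induction extends the bounded argument to unbounded proofs.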

Reference Materials

Organization

The course consists of weekly lectures and tutorials. Important announcements are sent via Moodle messages.

Time Slots and Rooms

  • Lecture: 10:15 - 12:00, Wednesday, in M 101, HGB (by Dr. Thomas Lemberger)
  • Tutorial: 14:15 - 15:45, Thursday, in D Z007, HGB (by Marek Jankola)

The first lecture session on 2025-05-07 will be about the detailed course organization and expectations. The first tutorial session will be on 2025-04-24.

Enrollment

Please enroll with the following key: wr$NUB#L9Zvl0&W37Ynx


The lecture is expected to cover the following topics:

  • Evolutionary trees
    • Perfect phylogeny
    • Ultrametric trees
    • Additive trees
    • Compact additive trees
    • Sandwich problems
    • Splits and split graphs
  • Physical mapping and assembly
    • PQ-trees
    • Interval graphs and parameterized sandwich problems
Prerequisites: material from the introductory bioinformatics or computer science curriculum.
Successful completion of the course Algorithmische Bioinformatik I is recommended.

Registration Key: SWTestSS2025

Lecture
Tuesday 12pm-2pm, Prof.-Huber-Pl. 2 (W), LEHRTURM-W201
First lecture on April 29th, 2025
Exercise
Thursday 2pm - 4pm, Main Building (i.e., Geschw.-Scholl-Pl. 1 (M)), M 014
First exercise on May 8th, 2025

The course targets Master's students in computer science or related programs. We expect participants to already have programming experience and basic knowledge of software engineering.

The focus of the course is on testing functional behavior. This course introduces basic terms used in the area of software testing and looks into the process of test-case and test-suite generation. To this end, the course discusses the test oracle problem, i.e., deciding whether or when a test is successful and whether its result is the expected one. The course also covers different manual and automatic approaches for input generation, distinguishing between black-box, grey-box, and white-box techniques. Furthermore, the course compares various metrics for judging the adequacy of test suites. Finally, the course studies the issue of regression testing.
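A hypothetical sketch of the test oracle idea (my own example): rather than hard-coding expected outputs, an oracle can state properties that every correct result of a sort routine must satisfy, so it can judge arbitrary generated inputs.

```python
# A property-based oracle for sorting: the output must be ordered and
# a permutation of the input.

def sort_under_test(xs):
    """The implementation under test; it wrongly drops duplicates."""
    return sorted(set(xs))

def sort_oracle(inp, out):
    """Accept only outputs that are ordered permutations of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

print(sort_oracle([3, 1, 2], sort_under_test([3, 1, 2])))  # True (bug hidden)
print(sort_oracle([2, 2, 1], sort_under_test([2, 2, 1])))  # False (bug found)
```

Note how the choice of inputs matters: only the second input, which contains a duplicate, triggers the bug, which is exactly why input generation and adequacy criteria are studied together with oracles.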

At the end of the course, you should be able to

  • explain basic testing terms
  • describe the test oracle problem
  • explain approaches that make correctness requirements, i.e., the expected test outcomes, executable 
  • formulate automatically checkable correctness requirements for requirements given in natural language
  • name and explain the studied input generation techniques and apply them to example programs
  • name, define, explain and distinguish the studied adequacy criteria for test suites, apply adequacy criteria to given test suites, and compare test suites based on adequacy criteria
  • describe techniques for regression testing and apply them to examples
  • discuss advantages, disadvantages, and limitations of the studied techniques

The lecture will cover a selection of advanced topics from the area of concurrent and parallel computing, such as:

  • Memory consistency models
  • C++ atomic types and operations
  • Synchronization algorithms (barriers, locks)
  • Concurrent objects
  • Lock-freedom and wait-freedom
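As a small Python sketch of one topic above, barrier synchronization (the course itself focuses on C++; this is my own illustration): every thread blocks at the barrier until all participants have arrived, so no thread enters phase 2 before every thread has finished phase 1.

```python
# Four worker threads synchronize at a barrier between two phases.
import threading

N = 4
barrier = threading.Barrier(N)
log = []  # list appends are atomic under CPython's GIL

def worker(i):
    log.append(("phase1", i))  # work before the barrier
    barrier.wait()             # block until all N threads arrive
    log.append(("phase2", i))  # only runs once everyone finished phase 1

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

phases = [phase for phase, _ in log]
print(phases[:N])  # ['phase1', 'phase1', 'phase1', 'phase1']
```

The lecture studies how such synchronization primitives are built from lower-level ingredients like atomic operations, and at what cost.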

This lecture explains the basics of quantum computing, including:

  • Mathematical foundations
  • Quantum bits (qubits) and quantum circuits
  • Superposition, entanglement and interference
  • Quantum oracle algorithms and variational algorithms
  • Complexity of quantum algorithms and the need for new complexity classes
  • Shor's algorithm and the implications for modern cryptography
  • Quantum communication and cryptography
  • Hardware limitations and error correction
In the tutorials, this knowledge will be deepened and quantum algorithms will be implemented using simulators and real quantum hardware.
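In the spirit of the simulators mentioned above, here is a minimal single-qubit simulation (my own stdlib-only sketch): a state is a pair of complex amplitudes, a gate is a 2x2 matrix, and the Hadamard gate puts |0> into an equal superposition of |0> and |1>.

```python
# Simulate applying the Hadamard gate to the basis state |0>.
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix with a 2-amplitude state vector."""
    (a, b), (c, d) = gate
    s0, s1 = state
    return (a * s0 + b * s1, c * s0 + d * s1)

inv_sqrt2 = 1 / math.sqrt(2)
H = ((inv_sqrt2, inv_sqrt2),
     (inv_sqrt2, -inv_sqrt2))

state = (1 + 0j, 0 + 0j)      # the basis state |0>
state = apply_gate(H, state)  # create the superposition
probs = [abs(amp) ** 2 for amp in state]
print([round(p, 3) for p in probs])  # [0.5, 0.5]
```

Measuring the resulting state yields 0 or 1 with equal probability; multi-qubit simulators extend the same idea to state vectors of length 2^n.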

The enrollment key is: HackerP.Shor

This lecture brings together two trending topics of technology-enhanced learning that have much overlap in terms of providing feedback and self-reflection.

On the one hand, concepts and methods, specific approaches, and standards in E-Assessment are discussed in detail, e.g., formative vs. summative assessment, item generation, and assessment tools.

On the other hand, objectives, approaches, architectures, and standards for Learning Analytics are discussed with a focus on the application in the TEL domain, e.g., social network analysis, recommender systems, clustering, information visualizations.

In combination, E-Assessment and Learning Analytics can provide solid opportunities for optimizing learning and teaching.

The examination will be a combination of a group project and an oral examination.

Enrollment key: TEL

In many modern application areas, data scientists face challenges that go beyond the techniques introduced in the basic module Data Mining Algorithms I. The module Data Mining Algorithms II covers advanced techniques for handling large data volumes, volatile data streams, complex object descriptions, and linked data. These topics are also known as the three major challenges (Volume, Velocity, Variety) in Big Data analysis. The module is directed at master's students who are interested in developing and designing knowledge discovery processes for various types of applications. This includes the development of new data mining and data preprocessing methods as well as the ability to select the best-suited established approach for a given practical challenge.
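One classic technique for the "velocity" challenge mentioned above is reservoir sampling (my illustration, not necessarily the module's choice of algorithms): maintain a uniform random sample of fixed size k from a stream whose total length is unknown in advance.

```python
# Reservoir sampling: each stream item ends up in the sample with
# probability k/n, where n is the (unknown) stream length.
import random

def reservoir_sample(stream, k, seed=0):
    """Return a uniform sample of up to k items from any iterable."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)  # fill the reservoir first
        else:
            j = rng.randint(0, i)   # keep item i with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), 5)
print(len(sample))  # 5
```

The key property is that the algorithm uses O(k) memory and a single pass, which is exactly what volatile, unbounded streams demand.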

Enrolment key: DMA2_2025_key

Computer games and games-related formats are an essential branch of the media industry, with sales exceeding those of the music or movie industry. Many games require building a dynamic environment with autonomously acting entities, comprising all types of mobile objects, non-player characters, computer opponents, and the dynamics of the environment itself. To model these elements, techniques from the area of Artificial Intelligence allow for building adaptive environments with interesting dynamics. From the point of view of AI research, games currently provide multiple environments that enable the development of breakthrough technology in Artificial Intelligence and Deep Learning. Projects like OpenAI Gym, AlphaGo, OpenAI Five, and AlphaStar earned much attention in the AI research community and the broader public. Games are important for developing autonomous systems because they provide environments that usually allow high throughput and offer clearly defined tasks for a learning agent to accomplish. The lecture provides an overview of techniques for building environment engines and making them suitable for large-scale, high-throughput games and simulations.
Furthermore, we will discuss the foundations of modeling agent behavior and how to evaluate it in deterministic and non-deterministic settings. Based on this formalism, we will discuss how to analyze and predict agent or player behavior. Finally, we will introduce various techniques for optimizing agent behavior, such as sequential planning and reinforcement learning.
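As a toy planning sketch for the last point (my own minimal example, not course material): value iteration on a 5-cell corridor where only reaching the rightmost cell yields a reward. The computed state values tell the agent that moving right is always best.

```python
# Value iteration: repeatedly back up the best one-step return until
# the state values converge.

def value_iteration(n=5, gamma=0.9, iters=50):
    """Compute optimal state values; cell n-1 is terminal with reward 1."""
    v = [0.0] * n
    for _ in range(iters):
        new = [0.0] * n
        for s in range(n - 1):                     # skip the terminal cell
            best = float("-inf")
            for a in (-1, 1):                      # move left or right
                s2 = min(max(s + a, 0), n - 1)
                reward = 1.0 if s2 == n - 1 else 0.0
                future = 0.0 if s2 == n - 1 else v[s2]
                best = max(best, reward + gamma * future)
            new[s] = best
        v = new
    return v

v = value_iteration()
print([round(x, 3) for x in v])  # [0.729, 0.81, 0.9, 1.0, 0.0]
```

Reinforcement learning tackles the same optimization when the transition and reward model is unknown and must be estimated from the agent's interaction with the environment.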

The enrollment key will be announced in the lectures.

Component   When                Where                  Starts at
Lecture     Tue, 13:00–16:00    Amalienstr. 73A – 112  29.04.2025
Tutorial 1  Wed, 14:00–16:00    Amalienstr. 73A – 220  07.05.2025
Tutorial 2  Wed, 16:00–18:00    Amalienstr. 73A – 220  07.05.2025