===== Schedule =====

^ Week           ^ Date            ^ Time         ^ Location        ^ Speaker               ^ Title ^
^ Week 48 (2024) | Mon, 25.11.2024 | 15:00–16:30  | MIS, E2 10      | Parvaneh Joharinad    | Group Equivariant Neural Networks II   |
^ Week 49 (2024) | Mon, 02.12.2024 | 15:00–16:30  | MIS, G3 10      | Nico Scherf           | Deep Generative Models   |
^ Week 50 (2024) | Mon, 09.12.2024 | ---          | ---             | ---                   | **NO MEETING**   |
^ Week 51 (2024) | Mon, 16.12.2024 | 15:00–16:30  | MIS, E2 10      | Jan Ewald             | On the (Underestimated) Importance of Objective/Loss Functions   |
^ Week 3 (2025)  | Mon, 13.01.2025 | 14:00–15:30  | MIS, A3 01      | Jan Ewald             | Autoencoder and Their Variants for Biomedical Data   |
^ Week 5 (2025)  | Mon, 27.01.2025 | 14:00–15:30  | MIS, A3 01      | Duc Luu               | Learning Dynamical Systems II   |
^ Week 6 (2025)  | Mon, 03.02.2025 | 14:00–15:30  | MIS, A3 01      | Robert Haase          | Large Language Models for Code Generation   |
^ Week 7 (2025)  | Mon, 10.02.2025 | 14:00–15:30  | MIS, A3 01      | Guido Montufar        | An Overview of Theories for Feature Learning in Neural Networks I   |
^ Week 8 (2025)  | Mon, 17.02.2025 | 14:00–15:30  | MIS, A3 01      | Guido Montufar        | An Overview of Theories for Feature Learning in Neural Networks II   |
^ Week 9 (2025)  | Mon, 24.02.2025 | 14:00–15:30  | MIS, A3 01      | Paul Breiding         | Computing with Algebraic Varieties I   |
^ Week 10 (2025) | Mon, 03.03.2025 | ---          | ---             | ---                   | **NO MEETING**   |
^ Week 11 (2025) | Mon, 10.03.2025 | ---          | ---             | ---                   | **NO MEETING**   |
^ Week 12 (2025) | Mon, 17.03.2025 | 14:00–15:30  | MIS, A3 01      | Angelica Torres       | Varieties in Machine Learning I   |
^ Week 13 (2025) | Mon, 24.03.2025 | 14:00–15:30  | MIS, A3 01      | Angelica Torres       | Varieties in Machine Learning II   |
^ Week 14 (2025) | Mon, 31.03.2025 | ---          | ---             | ---                   | **NO MEETING**   |
^ Week 15 (2025) | Mon, 07.04.2025 | 14:00–15:30  | MIS, A3 01      | Paul Breiding         | Computing with Algebraic Varieties II   |
^ Week 16 (2025) | Mon, 14.04.2025 | 14:00–15:30  | MIS, A3 01      | Marzieh Eidi          | Geometric Machine Learning   |
  
===== Information =====
  
**Speaker**: Diaaeldin Taha (Max Planck Institute for Mathematics in the Sciences, Germany)

**Title**: Graph and Topological Neural Networks I & II
  
**Description**: In these two sessions, we will provide an overview of deep learning with a focus on graph and topological neural networks. We will begin by reviewing neural networks, parameter estimation, and the universal approximation theorem. Then, we will discuss graphs and motivate graph convolutional neural networks by tracing their origins from spectral filters in signal processing. Lastly, we will review recent progress in topological deep learning, particularly focusing on simplicial, cellular, and hypergraph neural networks as extensions of graph neural networks. We will assume a basic familiarity with linear algebra and calculus; all relevant concepts from graph theory and topology will be introduced.
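
As a small illustration of the spectral-filter view mentioned above (added for this page, not taken from the lecture), the sketch below implements one graph-convolution layer in the style of Kipf and Welling with plain NumPy; the matrices ''A'', ''H'' and ''W'' are toy placeholders.

<code python>
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d') weights.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetrically normalised adjacency
    return np.maximum(A_norm @ H @ W, 0.0)      # aggregate neighbours, mix features, ReLU

# toy graph: a path on three nodes with two-dimensional node features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)  # (3, 4)
</code>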
  
**Speaker**: Parvaneh Joharinad

**Title**: Group Equivariant Neural Networks I & II
  
==== Week 49 (2024) ====
  
**Speaker**: Nico Scherf

**Title**: Deep Generative Models
  
**Description**: In this lecture, I will introduce key concepts underlying deep generative models and provide an overview of various model classes. The focus will then shift to generative adversarial networks (GANs), with a possible introduction to variational autoencoders (VAEs) if time allows. This presentation is conceptual in nature, emphasizing intuitive understanding over theoretical or implementation details. My goal is to offer a clear and accessible overview of these topics. The content is based on Simon Prince's freely available textbook, Understanding Deep Learning (https://udlbook.github.io/udlbook/).
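
For readers who want a concrete anchor (an added sketch, not material from the lecture or the textbook), the snippet below writes out the standard discriminator loss and the non-saturating generator loss for given discriminator scores; ''d_real'' and ''d_fake'' are made-up example values.

<code python>
import numpy as np

def discriminator_loss(d_real, d_fake):
    # the discriminator wants D(x_real) -> 1 and D(G(z)) -> 0
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # non-saturating generator loss: push D(G(z)) -> 1
    return -np.mean(np.log(d_fake))

d_real = np.array([0.90, 0.80, 0.95])  # discriminator scores on real samples (made up)
d_fake = np.array([0.10, 0.30, 0.20])  # discriminator scores on generated samples (made up)
print(discriminator_loss(d_real, d_fake), generator_loss(d_fake))
</code>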
  
**Speaker**: Jan Ewald

**Title**: On the (Underestimated) Importance of Objective/Loss Functions
  
**Description**: At the core of supervised and unsupervised learning are objective functions that are minimized (or maximized) during the training of neural networks or when determining optimal strategies via mathematical modeling. Despite their importance, they often receive surprisingly little attention in publications and presentations as justification for modeling and methodological AI decisions. In the lecture, we will discuss why they deserve more awareness by exploring examples, summarizing types of and ideas behind objective/loss functions, and going through common pitfalls.
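
As an added illustration of why the choice of loss matters (not part of the lecture), the toy computation below compares the gradient of squared error with that of cross-entropy for a confidently wrong binary prediction: the squared-error gradient vanishes where the cross-entropy gradient does not.

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# a confidently wrong prediction: true label 1, logit strongly negative
y, z = 1.0, -6.0
p = sigmoid(z)

# gradients of the two losses with respect to the logit z
grad_mse = 2 * (p - y) * p * (1 - p)   # d/dz of (p - y)^2: vanishes as p -> 0 or 1
grad_ce = p - y                        # d/dz of cross-entropy: stays of order one

print(f"p = {p:.4f}, squared-error gradient = {grad_mse:.6f}, cross-entropy gradient = {grad_ce:.4f}")
# the squared-error gradient is tiny, so training barely corrects the confident mistake
</code>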
  
**Speaker**: Jan Ewald

**Title**: Autoencoder and Their Variants for Biomedical Data
  
**Description**: Powerful computational methods are crucial for making use of the large-scale, high-dimensional data generated in biomedical research to gain insights into biological systems. In recent years, autoencoders, a family of deep learning-based methods, have shown tremendous potential for biomedical research in various scenarios, ranging from synthetic data generation and translation between data modalities to the incorporation of biological knowledge to make embeddings explainable. In the lecture we will go from the basics to use cases and finally to a hands-on session applying our framework AUTOENCODIX.
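
The following is a generic PyTorch sketch of a plain autoencoder, added here only as an illustration; it is //not// the AUTOENCODIX API, and the layer sizes and the random input standing in for biomedical data are arbitrary.

<code python>
import torch
from torch import nn

class Autoencoder(nn.Module):
    """Fully connected autoencoder: input -> low-dimensional code -> reconstruction."""
    def __init__(self, n_features=1000, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(64, 1000)        # random stand-in for, e.g., expression profiles
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):               # a few reconstruction steps
    loss = nn.functional.mse_loss(model(x), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())               # reconstruction error after a few updates
</code>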
  
**Speaker**: Duc Luu

**Title**: Learning Dynamical Systems I & II

**Description**: Open challenges in learning data-driven dynamical systems lie not only in their high nonlinearity and complexity, but also in nonautonomous and/or stochastic dependence on additive and multiplicative noise, especially when the task is to predict short-term future states, to learn asymptotic structures such as attractors, or even to control the stability of the system. In these two lectures I will give an overview of and report on recent developments in neural-network and deep-learning methods for learning dynamical systems, in particular those generated by differential/difference equations.
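
As a minimal added example (not from the lectures) of learning a dynamical system from trajectory data, the sketch below recovers the one-step map of a noisy linear system by least squares; for nonlinear systems one would replace the linear fit by, e.g., a neural network.

<code python>
import numpy as np

rng = np.random.default_rng(0)

# simulate a trajectory of an (unknown) linear system x_{t+1} = A_true x_t + noise
A_true = np.array([[0.90, -0.20], [0.10, 0.95]])
x = np.zeros((200, 2))
x[0] = rng.normal(size=2)
for t in range(199):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.normal(size=2)

# learn the one-step map from data: least-squares fit of X_next ~ X_prev @ A^T
X_prev, X_next = x[:-1], x[1:]
A_learned = np.linalg.lstsq(X_prev, X_next, rcond=None)[0].T

print(np.round(A_learned, 2))  # close to A_true; a nonlinear map would call for, e.g., an MLP
</code>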
  
==== Week 6 (2025) ====
  
**Speaker**: Robert Haase

**Title**: Large Language Models: An Introduction

**Description**: Large Language Models (LLMs) are changing the way humans interact with computers. This has an impact on all scientific fields by enabling new ways to achieve, for example, data-analysis goals. In this lecture we will be introduced to LLMs and dive into common applications in the scientific context. We will see how to generate text, code, and images using LLMs, and how LLMs can extract information from text and images. We will go through selected prompt-engineering techniques that enable scientists to tune the output of LLMs towards their scientific goal, and discuss how to do quality assurance in this context.
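
As an added illustration of prompt engineering (no particular LLM service or client library is assumed), the sketch below builds a few-shot prompt with an explicit output format, which makes the model's answers easier to check automatically.

<code python>
# A structured few-shot template with an explicit output format makes the
# model's answers easier to parse and to check for quality automatically.
TEMPLATE = (
    "You are assisting with scientific data analysis.\n"
    "Answer with Python code only, no explanatory text.\n\n"
    "Example request: compute the mean of every column of the array `data`\n"
    "Example answer: column_means = data.mean(axis=0)\n\n"
    "Request: {request}\n"
    "Answer:"
)

def build_prompt(request: str) -> str:
    """Fill the template; the resulting string would then be sent to an LLM of choice."""
    return TEMPLATE.format(request=request)

print(build_prompt("z-score every column of the array `data`"))
</code>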
  
==== Weeks 7 & 8 (2025) ====
**Speaker**: Guido Montufar
  
**Title**: An Overview of Theories for Feature Learning in Neural Networks I & II

**Description**: Feature learning, or learning meaningful representations of raw data, has been one of the guiding ideas behind deep learning. In principle and in practice, deep neural networks can automatically learn hierarchies of data representations that successively distill the relevant parts of a problem and make it easier to arrive at a good solution. However, developing a theoretical framework to characterize feature learning and the properties of the data representations learned by a neural network is still an ongoing program. These lectures discuss some of the recent perspectives and advances in this direction.
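
As an added toy experiment (not from the lectures), the sketch below trains a small two-layer ReLU network with plain gradient descent and measures how much the kernel of hidden representations moves during training; in the lazy/NTK picture this kernel stays essentially fixed, whereas a substantial change is one simple signature of feature learning.

<code python>
import numpy as np

rng = np.random.default_rng(1)

# toy two-layer network f(x) = a . relu(W x), trained with plain gradient descent
n, d, m, lr, steps = 100, 5, 50, 0.01, 2000
X = rng.normal(size=(n, d))
y = np.tanh(X[:, 0] * X[:, 1])              # a simple nonlinear regression target

W = rng.normal(size=(m, d)) / np.sqrt(d)    # first-layer weights (the "features")
a = rng.normal(size=m) / np.sqrt(m)         # output weights

def hidden(W):
    return np.maximum(X @ W.T, 0.0)         # hidden-layer representation of the data

K_init = hidden(W) @ hidden(W).T            # kernel of hidden representations before training

for _ in range(steps):
    Z = X @ W.T
    H = np.maximum(Z, 0.0)
    r = H @ a - y                                                # residuals
    grad_a = 2.0 / n * H.T @ r                                   # gradient for output weights
    grad_W = 2.0 / n * ((r[:, None] * (Z > 0) * a[None, :]).T @ X)  # gradient for first layer
    a -= lr * grad_a
    W -= lr * grad_W

K_final = hidden(W) @ hidden(W).T
change = np.linalg.norm(K_final - K_init) / np.linalg.norm(K_init)
print(f"relative change of the hidden-layer kernel: {change:.2f}")
# in the lazy/NTK regime this kernel stays (nearly) fixed during training;
# a substantial change means the first layer has adapted its features to the task
</code>
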
==== Weeks 9 & 15 (2025) ====
  
**Speaker**: Paul Breiding
  
**Title**: Computing with Algebraic Varieties I & II

**Description**: These lectures will consider algebraic varieties from a computational point of view. They will cover topics such as dimension, degree, Gröbner bases, and homotopy continuation. As a motivation, we will discuss varieties relevant in machine learning and data science.
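
As an added mini-example (not from the lectures), the snippet below uses SymPy to compute a lexicographic Gröbner basis of the ideal of a circle intersected with a line, from which the points of the variety can be read off; numerical homotopy continuation would be done with dedicated software instead.

<code python>
import sympy as sp

x, y = sp.symbols("x y")

# the variety cut out by a circle and a line: x^2 + y^2 - 4 = 0, x - y = 0
F = [x**2 + y**2 - 4, x - y]

# a lexicographic Groebner basis eliminates x from the second generator,
# so the solutions can be read off by back-substitution
G = sp.groebner(F, x, y, order="lex")
print(list(G))               # [x - y, y**2 - 2] (up to normalisation)
print(sp.solve(F, [x, y]))   # the two real intersection points (+-sqrt(2), +-sqrt(2))
</code>
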
==== Weeks 12 & 13 (2025) ====

**Speaker**: Angelica Torres (MPI MIS)

**Title**: Varieties in Machine Learning I & II

**Description**:
  * **Varieties in Machine Learning I**: In this session we will build on Paul Breiding’s lectures and focus on the relation between the implicitization problem in Algebraic Geometry and the function space of neural networks. We will review the work by Kohn, Trager, Kileel, Montufar, Li, and others regarding the geometric properties of the function spaces of linear neural networks and polynomial neural networks (a small illustration follows after this list).
  * **Varieties in Machine Learning II**: In this session we focus on the algebraic and geometric properties of the varieties studied in the previous session. We will pay special attention to the Euclidean Distance Degree and its relation to the optimization problem in Machine Learning.
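
As an added illustration related to the first session (an assumed toy setup, not from the lectures): the end-to-end map of a two-layer linear network with a narrow inner layer is a matrix of bounded rank, so the network's function space sits inside a determinantal variety, which is the kind of geometric object studied here.

<code python>
import numpy as np

rng = np.random.default_rng(2)

# a linear network R^5 -> R^4 with inner width 2: x |-> W2 @ (W1 @ x)
d_in, d_hidden, d_out = 5, 2, 4
W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_out, d_hidden))

end_to_end = W2 @ W1                        # the matrix the network actually computes
print(np.linalg.matrix_rank(end_to_end))    # at most 2, the bottleneck width

# hence the function space lies in the determinantal variety of 4x5 matrices of
# rank <= 2: every 3x3 minor of the end-to-end matrix vanishes
minor = end_to_end[:3, :3]
print(abs(np.linalg.det(minor)) < 1e-10)    # True up to floating-point error
</code>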
  
  
==== Week 16 (2025) ====
  
**Speaker**: Marzieh Eidi

**Title**: Geometric Machine Learning