(1)

Machine Learning

Introduction

(2)

Prerequisites: Math

One should at least be able to guess what these terms mean.

Examples:

In particular: linear algebra (vectors, matrices, SVD, scalar products), a bit of geometry, functions (derivatives, gradients, integrals, series), optimization, probability theory …
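As a self-check, here is a minimal Python/NumPy sketch (purely illustrative, not part of the course material) touching a few of these prerequisites: a scalar product, an SVD, and a numerical gradient. All names and values in it are made up for the example.

```python
import numpy as np

# Scalar (dot) product of two vectors
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
dot = u @ v  # 1*4 + 2*5 + 3*6 = 32.0

# Singular value decomposition of a small symmetric matrix
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)  # A ~ U @ np.diag(s) @ Vt; singular values s = [4., 2.]

# Central-difference approximation of the gradient of f(x, y) = x^2 + 3y at (1, 2)
f = lambda p: p[0] ** 2 + 3.0 * p[1]
p = np.array([1.0, 2.0])
eps = 1e-6
grad = np.array([(f(p + eps * e) - f(p - eps * e)) / (2 * eps) for e in np.eye(2)])
# analytically: (2x, 3) = [2., 3.]

print(dot, s, grad)
```

If each line here is at least plausible to you, the mathematical level of the course should be manageable.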

(3)

Topics

1. Probability theory: probabilistic inference and learning (3 DS)

2. Discriminative learning (1 DS)

3. Neurons and neural networks: simple linear classifiers, complex classifiers by combination, basic algorithms, learning, clustering (2 DS)

4. Support Vector Machines: linear classifiers again, complex classifiers by generalization, kernels, a bit of statistical learning theory, optimization techniques (3 DS)

5. Decision trees, regression trees (1 DS)

6. Introduction to graphical models, MRFs (1–2 DS)

(4)

Seminars

• 2 groups, Thursday 4th and 5th DS. Please split into the groups yourselves

• Practical assignments (no computers, on the board) – they supplement the lectures

• Assignments are posted on the page a couple of days in advance

• Homework!!!

• Credits: active participation is assessed – points during the semester; optional – written test

Exam: oral (graded); with seminars – 4 SWS, without – 2 SWS

(5)

Miscellaneous

• Scripts, slides (quite chaotic at the moment), info, etc.

http://www1.inf.tu-dresden.de/~ds24/lehre/ml_ws_2013/ml_ws_2013.html

• Literature:

Christopher M. Bishop: „Pattern Recognition and Machine Learning“ (covers practically all the material)

Michail I. Schlesinger, Václav Hlavác: „Ten Lectures on Statistical and Structural Pattern Recognition“ (especially statistical PR)

During the semester – papers (see www1.inf...) on SVMs, neural networks, etc.

• Comments, requests, questions, and criticism are welcome (anonymously via the mail form as well).
