Lecture 'Automatic Speech Recognition'
Since Prof. Rieck will probably not be able to hold the lectures until the end of the semester, I will hold the remaining ones.
The password for the slides will be given to you in the lecture.
There is a lot of Python example code available that uses neural networks to perform speech recognition. Find a simple project example and try to get it running!
In the lecture, the simplest neural network model was introduced: the Perceptron. It consists only of a layer of input neurons and a layer of output neurons; there are no intermediate layers. The Perceptron model was attractive because it is quite easy to derive a formula for adjusting the weights such that certain input vectors are mapped to certain output vectors.
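To make this concrete, here is a minimal sketch of a Perceptron with the classical learning rule (all names and parameter values here are my own illustrative choices, not from the lecture slides): a step-activated output neuron whose weights are nudged toward the target whenever the prediction is wrong.

```python
# Minimal Perceptron sketch: one layer of inputs connected directly to a
# single output neuron with a step activation -- no intermediate layers.

def predict(weights, bias, x):
    # Step activation: output 1 if the weighted sum exceeds 0, else 0.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Perceptron learning rule: for each misclassified sample, shift the
    # weights in the direction that reduces the error (target - prediction).
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Example: learning logical AND, a linearly separable mapping that a
# single-layer Perceptron is guaranteed to learn.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

After training, `predict(w, b, x)` reproduces the AND function for all four input vectors. Keep this simple mapping in mind when thinking about the next question.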
However, the Perceptron has a severe limitation! Search the Internet and the literature for why the Perceptron is not as powerful as a Multi-Layer Perceptron (which has intermediate layers). What is this limitation, and what is its reason?