Deep Learning - Exercise 11

Learning goal: Get an idea of how features can be learned in an unsupervised fashion

In a CNN, the features in the convolutional layers are today mostly learned in a supervised fashion: for each input pattern there is a desired output pattern, and with the help of the backpropagation algorithm the error is propagated backwards through the MLP and the pooling layers to the neurons/filters in the convolutional layers, where their weights (the filter matrix values, i.e. the feature representations) are adjusted.

In machine learning, there is also the approach of unsupervised learning, in which only input data, but no output data, is used during the learning process.

In this exercise we will learn basic image features using such an unsupervised approach.

1. Hebb's rule

Hebbian theory is an approach in neuroscience that tries to explain how the synaptic weights of real neurons are adapted in an unsupervised learning setting. It is often summarized as "neurons that fire together, wire together."

Hebb's rule is a learning rule for weight changes that transfers the idea of Hebbian theory to artificial neurons.
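
In its most common form, the rule changes a weight in proportion to the product of the pre-synaptic activity (the input x_i) and the post-synaptic activity (the neuron's output y):

$$\Delta w_i = \eta \, x_i \, y, \qquad y = \sum_j w_j x_j,$$

where $\eta$ is the learning rate.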

Download the following Visual Studio project, compile it, and run it. It implements Hebb's learning rule for a single neuron.
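
If you want to see the core idea before opening the project, the following minimal C++ sketch shows what such an implementation might look like. It is not the project's actual code; the patch size, learning rate, and the use of random data instead of real image patches are assumptions. Printing the weight norm every few steps makes the rule's long-term behaviour visible:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const int N = 49;        // assumed: a 7x7 input patch
    const double eta = 0.01; // assumed learning rate

    std::vector<double> w(N), x(N);
    for (double& v : w) // small random initial weights
        v = rand() / (double)RAND_MAX * 0.1;

    for (int step = 1; step <= 200; ++step) {
        // random input pattern in [0,1] (stand-in for an image patch)
        for (int i = 0; i < N; ++i)
            x[i] = rand() / (double)RAND_MAX;

        // neuron output: weighted sum of the inputs
        double y = 0.0;
        for (int i = 0; i < N; ++i)
            y += w[i] * x[i];

        // Hebb's rule: strengthen each weight whose input is active
        // at the same time as the output
        for (int i = 0; i < N; ++i)
            w[i] += eta * x[i] * y;

        if (step % 50 == 0) {
            double norm = 0.0;
            for (double v : w) norm += v * v;
            printf("step %3d: |w| = %g\n", step, std::sqrt(norm));
        }
    }
    return 0;
}
```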

Question: Is Hebb's learning rule alone appropriate for learning a filter in an unsupervised learning setting?

2. Adapt the learning rule

First, adapt the learning rule / the project so that you can learn a single feature in an unsupervised fashion. Then extend the project so that it learns 100 different features and visualizes these features during the learning process. A possible starting point for the first part is sketched below.
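
As a hint for the first part: the plain Hebbian update lets the weights grow without bound, so some normalization mechanism is needed. One classic choice is Oja's rule, which adds a decay term to Hebb's rule; it drives the weight vector towards unit length, where it converges to the first principal component of the input data. The following is a minimal sketch under assumed parameters (patch size, learning rate, random data instead of real image patches), not the reference solution:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const int N = 49;        // assumed: 7x7 image patches
    const double eta = 0.01; // assumed learning rate

    std::vector<double> w(N), x(N);
    for (double& v : w) // small random initial weights
        v = (rand() / (double)RAND_MAX - 0.5) * 0.1;

    for (int step = 0; step < 10000; ++step) {
        // stand-in for sampling a random patch from a training image
        for (int i = 0; i < N; ++i)
            x[i] = rand() / (double)RAND_MAX;

        // neuron output
        double y = 0.0;
        for (int i = 0; i < N; ++i)
            y += w[i] * x[i];

        // Oja's rule: the Hebbian term plus a decay term that
        // keeps the weight vector near unit length
        for (int i = 0; i < N; ++i)
            w[i] += eta * y * (x[i] - y * w[i]);
    }

    double norm = 0.0;
    for (double v : w) norm += v * v;
    printf("|w| after training: %f\n", std::sqrt(norm)); // stays bounded
    return 0;
}
```

For the second part, note that 100 neurons trained independently with the same rule on the same patches will all converge to roughly the same feature. To obtain 100 different features you need an additional ingredient, for example competition between the neurons or Sanger's generalization of Oja's rule. For the visualization, the weight vector of each feature can be rendered as a small grayscale image (e.g. 7x7 pixels) and redrawn every few hundred training steps.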