MLP stands for multi-layer perceptron. It is the most basic kind of artificial neural network. It is organized into layers of neurons, and each neuron in a layer is connected to every neuron of the next layer. The first layer is called the input layer because it receives the input data. The last layer is called the output layer: for a regression it contains as many neurons as there are values to predict, and for a classification it contains as many neurons as there are categories. All the layers in between are called the hidden layers.
Each neuron takes a fixed number of inputs n, imposed by the network's geometry. The neuron computes \sigma(\sum_{i=1}^{n} w_i x_i) (in practice a bias term b is usually added inside the sum), where
- \sigma is a non-linear activation function; the most common choices are the sigmoid, the hyperbolic tangent, and the Rectified Linear Unit (ReLU).
- w_i is the weight associated with input i
- x_i is the i-th input
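The computation above can be sketched in a few lines of plain Python. This is a minimal illustration, not how libraries implement it; the weights, bias, and choice of tanh as the activation are arbitrary here.

```python
import math

def neuron(weights, inputs, bias=0.0, activation=math.tanh):
    """One neuron: a non-linearity applied to the weighted sum of the inputs."""
    # Weighted sum sum_i w_i * x_i, plus the bias term.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

# A neuron with 3 inputs (illustrative weights and inputs).
y = neuron([0.5, -1.0, 0.25], [1.0, 2.0, 4.0])
```

Here z = 0.5*1.0 - 1.0*2.0 + 0.25*4.0 = -0.5, so y is tanh(-0.5).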
Such neurons and layers come ready-made in every deep-learning library, such as PyTorch or TensorFlow/Keras.
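As an illustration, a whole MLP can be assembled in PyTorch with `nn.Sequential`; the layer sizes below (4 input features, one hidden layer of 8 neurons, 3 output categories) are arbitrary and only meant to show the structure.

```python
import torch
from torch import nn

# A minimal MLP sketch (illustrative sizes):
# input layer of 4 features -> hidden layer of 8 neurons -> 3 output neurons.
model = nn.Sequential(
    nn.Linear(4, 8),   # fully connected: every input feeds every hidden neuron
    nn.ReLU(),         # the non-linearity sigma
    nn.Linear(8, 3),   # hidden layer -> output layer (one neuron per category)
)

x = torch.randn(1, 4)   # a batch containing a single 4-feature sample
logits = model(x)       # output of shape (1, 3)
```

Each `nn.Linear` holds the weights w_i (and a bias) for all the neurons of one layer at once.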