C1_W3.pdf

Neural Network Representation


We don’t count the Input Layer as an official layer of the NN; layer numbering starts at the first hidden layer.

Notation:

$w^{[i]}_{j}$ Refers to the COLUMN VECTOR of weights used by the jth node of the ith LAYER

$b^{[i]}_{j}$ Refers to the bias used by the jth node of the ith LAYER

$a^{[i]}_{j}$ Refers to the output (activation) computed by the jth node of the ith LAYER

$a^{[i] (j)} _ {k}$ Refers to the activation of the kth neuron of the ith LAYER, computed on the jth training example

Note:

$$ z_{1}^{[1]} = w_{1}^{[1]T}x + b_{1}^{[1]}, \quad \text{then} \quad a_{1}^{[1]} = \sigma(z_{1}^{[1]}) $$
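This per-neuron computation can be sketched in NumPy as follows; the concrete values of $x$, $w_1^{[1]}$, and $b_1^{[1]}$ are made-up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([[1.0], [2.0], [3.0]])    # input, shape (n_x, 1) -- illustrative values
w = np.array([[0.1], [0.2], [-0.3]])   # w_1^[1], a column vector, shape (n_x, 1)
b = 0.5                                # b_1^[1], a scalar

z = w.T @ x + b                        # z_1^[1] = w^T x + b, shape (1, 1)
a = sigmoid(z)                         # a_1^[1] = sigma(z_1^[1])
print(a.shape)                         # (1, 1)
```

Note that $w_1^{[1]}$ is stored as a column vector, so it must be transposed before the dot product with $x$.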

Computing a Neural Network’s Output


But we’re going to use the vectorized representation to speed up the computation


We construct $W^{[1]}$ by stacking the parameter vectors $w^{[1]}_j$ of all the neurons of the first layer.

So $W^{[1]}$ is a matrix whose rows are the transposed parameter vectors of the first-layer neurons.

$W^{[j]}$ Has shape (number of neurons, number of inputs)

$b^{[j]}$ Has shape (number of neurons, 1)

$A^{[j]}$ Has shape (number of neurons, 1) for a single input example
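The vectorized computation for one layer can be sketched as below; the layer sizes (3 inputs, 4 neurons) and the random parameter values are assumptions chosen only to check the shapes:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n_x, n_1 = 3, 4                        # hypothetical sizes: 3 inputs, 4 neurons
rng = np.random.default_rng(0)

W1 = rng.standard_normal((n_1, n_x))   # rows are w_j^[1].T -> shape (4, 3)
b1 = np.zeros((n_1, 1))                # shape (4, 1)
x  = rng.standard_normal((n_x, 1))     # one input example, shape (3, 1)

Z1 = W1 @ x + b1                       # Z^[1] = W^[1] x + b^[1], shape (4, 1)
A1 = sigmoid(Z1)                       # A^[1] = sigma(Z^[1]), shape (4, 1)
print(Z1.shape, A1.shape)              # (4, 1) (4, 1)
```

One matrix product replaces the loop over neurons: each row of $W^{[1]}$ reproduces one per-neuron dot product $w_j^{[1]T}x$.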