The network uses a \textit{feedforward} process to compute the activations of the hidden and output layers. Each neuron in the hidden layer applies the sigmoid activation function to the weighted sum of its inputs plus its bias:
\begin{verbatim}
for (int j = 0; j < NUM_HIDDEN; j++) {
    /* start from the neuron's bias, then add the weighted inputs */
    double activation = hiddenLayerBias[j];
    for (int k = 0; k < NUM_INPUTS; k++) {
        activation += trainingInputs[i][k] * hiddenWeights[k][j];
    }
    hiddenLayer[j] = sigmoid(activation);
}
\end{verbatim}
The same logic is applied in the output layer, using the activations of the hidden layer as inputs.
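As a sketch, that output-layer pass would look like the following, assuming analogous arrays \texttt{outputLayerBias} and \texttt{outputWeights} and a constant \texttt{NUM\_OUTPUTS} (these names are not shown in the excerpt above):

\begin{verbatim}
for (int j = 0; j < NUM_OUTPUTS; j++) {
    /* weighted sum of the hidden activations plus the output bias */
    double activation = outputLayerBias[j];
    for (int k = 0; k < NUM_HIDDEN; k++) {
        activation += hiddenLayer[k] * outputWeights[k][j];
    }
    outputLayer[j] = sigmoid(activation);
}
\end{verbatim}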
Backpropagation adjusts the weights based on the calculated errors. First, for each output neuron the error is computed as the difference between the expected and the actual output, and the corresponding delta is obtained by multiplying that error by the derivative of the sigmoid function:
\begin{verbatim}
double error = (trainingOutputs[i][j] - outputLayer[j]);
deltaOutput[j] = error * sigmoid_derivative(outputLayer[j]);
\end{verbatim}
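The helpers \texttt{sigmoid} and \texttt{sigmoid\_derivative} are not shown in this excerpt; a minimal sketch, assuming the derivative is evaluated on the already-activated value $y = \sigma(x)$, so that $\sigma'(x) = y\,(1 - y)$:

\begin{verbatim}
#include <math.h>

double sigmoid(double x) {
    return 1.0 / (1.0 + exp(-x));
}

/* takes the activated value y = sigmoid(x), not the raw weighted sum */
double sigmoid_derivative(double y) {
    return y * (1.0 - y);
}
\end{verbatim}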
The error for each hidden neuron is then calculated based on the errors of the connected output neurons. The weights and biases are adjusted proportionally to the error and the learning rate \texttt{lr}.
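A sketch of that step, assuming a \texttt{deltaHidden} array and the same \texttt{outputWeights}/\texttt{outputLayerBias} naming as above (only \texttt{lr}, \texttt{hiddenWeights}, and \texttt{hiddenLayerBias} appear in the original excerpt):

\begin{verbatim}
/* propagate the output deltas back through the output weights */
for (int j = 0; j < NUM_HIDDEN; j++) {
    double error = 0.0;
    for (int k = 0; k < NUM_OUTPUTS; k++) {
        error += deltaOutput[k] * outputWeights[j][k];
    }
    deltaHidden[j] = error * sigmoid_derivative(hiddenLayer[j]);
}

/* update output weights and biases in proportion to lr */
for (int j = 0; j < NUM_OUTPUTS; j++) {
    outputLayerBias[j] += deltaOutput[j] * lr;
    for (int k = 0; k < NUM_HIDDEN; k++) {
        outputWeights[k][j] += hiddenLayer[k] * deltaOutput[j] * lr;
    }
}

/* update hidden weights and biases the same way */
for (int j = 0; j < NUM_HIDDEN; j++) {
    hiddenLayerBias[j] += deltaHidden[j] * lr;
    for (int k = 0; k < NUM_INPUTS; k++) {
        hiddenWeights[k][j] += trainingInputs[i][k] * deltaHidden[j] * lr;
    }
}
\end{verbatim}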