The network is trained in a loop that runs for a fixed number of epochs (for example, \texttt{numEpochs = 1000000}). Each epoch begins by randomly shuffling the order of the training examples with the \texttt{shuffle} function:
\begin{verbatim}
shuffle(trainingSetOrder, NUM_TRAINING_SETS);
\end{verbatim}
For each training example, the network performs a forward pass, then applies backpropagation to adjust weights and biases based on the error. Once training is complete, the final weights can be saved to a file using the \texttt{backup\_weights} function.
Output is printed at each training step, making it possible to follow the evolution of the weights and biases and to inspect their final values.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{sections/partie-technique/IA/entrainement/ia-train-demo.png}
\caption{Example output of the training of the XOR neural network.}
\end{figure}