Training a model in OPE Prop
The training process is where the unique OPE Prop algorithm starts to shine. Training is usually the longest stage of building a model, since it is where most of the computation happens. During training, each unit in the layers calculates the parameters that yield the lowest error for the current model. The OPE Prop algorithm is designed with accessibility in mind, promising to be about 20% faster than gradient descent. Keep in mind that this library is just a prototype, so the algorithm is not yet fully optimized.
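For intuition, "error" here means an aggregate measure of how far the model's outputs are from the expected answers. The snippet below is a minimal, library-agnostic sketch of one common such measure, mean squared error; it is purely an illustration and not necessarily the metric OPE Prop uses internally.

```ruby
# Sketch of a mean-squared-error metric (illustration only;
# not necessarily the error measure OPE Prop uses internally).
def mean_squared_error(predictions, answers)
  total = 0.0
  predictions.zip(answers) do |pred_row, ans_row|
    pred_row.zip(ans_row) { |p, a| total += (p - a)**2 }
  end
  total / predictions.size
end
```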
Before training a model, you need to gather a dataset. In this documentation, the dataset will be generated manually with the following code:
```ruby
# Build 1,000,000 input rows of two numbers each.
inputs = []
(0...1000).each do |inp1_i|
  (0...1000).each do |inp2_i|
    inputs << [inp1_i * 0.1 - 50, inp2_i * 0.1 - 50]
  end
end

# Each answer row holds a single number: a linear function of the two inputs.
answers = inputs.map { |inp| [inp[0] * 3 + inp[1] * (-4) + 2] }
```
The code above generates a 2-dimensional array of inputs, with values ranging from -50 up to (but not including) 50. Each row of answers holds a single number, computed as a linear function of the corresponding inputs. Make sure that your inputs and answers arrays are both arrays of arrays: each data entry (data row) must itself be an array, even if it contains only one item.
Also make sure that the number of units in the last layer is the same as the number of items in one row of answers. So, in code, the number of units in the last layer has to equal answers[0].size.
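To catch shape mistakes early, you can run a quick sanity check like the one below before training. Note that check_dataset! is a hypothetical helper written for this documentation, not part of the OPE Prop API.

```ruby
# Hypothetical sanity check for dataset shape (not part of OPE Prop itself).
def check_dataset!(inputs, answers)
  raise "inputs and answers must have the same length" unless inputs.size == answers.size
  raise "each input row must be an array" unless inputs.all? { |row| row.is_a?(Array) }
  raise "each answer row must be an array" unless answers.all? { |row| row.is_a?(Array) }
  true
end

check_dataset!(inputs, answers)
# Remember: the last layer of your model must have answers[0].size units.
```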
After gathering a valid dataset, you can train your model with the following method:
```ruby
model.train inputs, answers, epochs
```
where epochs is the number of epochs you want the model to train for.
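Putting it together, a concrete training call might look like this; 10 epochs is an arbitrary value chosen for illustration, and you should tune it for your dataset.

```ruby
epochs = 10 # arbitrary example value; tune for your dataset
model.train inputs, answers, epochs
```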
After training, your model has likely reached a low error and fit the dataset, so its predictions should now be reasonably accurate.
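To sanity-check the fit, you can compare the model's predictions against the known answers for a few rows. The sketch below reuses the mean_squared_error helper from earlier; model.predict is a hypothetical stand-in name, so substitute whatever prediction method OPE Prop actually exposes.

```ruby
# Hypothetical spot check; model.predict stands in for the library's
# real prediction method and is assumed to return one output row (an array).
sample_inputs  = inputs.first(5)
sample_answers = answers.first(5)
predictions = sample_inputs.map { |row| model.predict(row) }
puts mean_squared_error(predictions, sample_answers)
```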
You can learn about how OPE Prop works as an algorithm here. If you have any questions or want to know more about me, check out my profile!