OPE Prop for the Hidden Layer
OPE Prop provides a basic backpropagation algorithm for the hidden layer. It is probably not the most efficient, but it works and is sometimes faster than gradient descent. The hidden-layer algorithm may be improved and reworked in the future.
The Main Algorithm
For ease of use, the current OPE Prop algorithm for hidden layers reuses the formulas of the output layer. The trick is to first calculate the outputs that this layer should have produced, using the next layer's needed outputs and this layer's previous outputs, and then apply the same backpropagation algorithm as for the output layer.
So, to calculate the needed outputs for the previous layer, assume that the previous layer has n units with previous outputs o_1, o_2, ..., o_n, and that the next layer has m units, where unit j has the needed output (answer) a_j, the bias b_j, and the weights w_j,1, w_j,2, ..., w_j,n connecting it to the previous layer.
Now, to calculate the needed output of each unit, labeled o_r for unit r, we repeat this algorithm:
1. Create an empty vector with the number of units of the previous layer as its size.
2. Go through each item r, computing:
g = (a_1 + a_2 + ... + a_m) - (b_1 + b_2 + ... + b_m)   (the sum of the answers minus the sum of the biases)
k = w_1,r + w_2,r + ... + w_m,r   (the sum of the weights attached to unit r)
p = the sum of w_j,i * o_i over every next-layer unit j and every previous-layer unit i before r (i < r)
t = the sum of w_j,i * o_i over every next-layer unit j and every previous-layer unit i after r (i > r)
o_r = (g - (p + t)) / k
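For example, with made-up numbers: let the next layer have two units, where unit 1 has a_1 = 5, b_1 = 1 and weights (2, 1), and unit 2 has a_2 = 3, b_2 = 0 and weights (1, 1), and let the previous output o_2 = 1 while we solve for o_1 (r = 1):
g = (5 + 3) - (1 + 0) = 7
k = 2 + 1 = 3
p = 0   (there are no units before r = 1)
t = 1*1 + 1*1 = 2
o_1 = (7 - (0 + 2)) / 3 = 5/3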
In Ruby code, this looks like the following (part of the code from the OPE Prop Library):
inputs.size.times do |inp_i|
  # For each next-layer unit: answer - bias - weighted sum of every other input
  inputs[inp_i] = answers.zip(@units).map { |ans, unit|
    ans - unit["b"][0] - (0...inputs.size).zip(inputs, unit["w"]).map { |w_i, i, w| w_i != inp_i ? i * w : 0 }.sum
  }.sum / @units.map { |unit| unit["w"][inp_i] }.sum # divide by the summed weights attached to this unit
end
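If you want to experiment with this step outside the library, here is a minimal standalone sketch; the layer sizes and all of the numbers are made up for illustration, and the "w"/"b" hashes simply mirror the unit layout used in the library code above:
units   = [{ "w" => [0.5, -0.2,  0.1], "b" => [0.3]  },
           { "w" => [0.4,  0.7, -0.5], "b" => [-0.1] }]
answers = [1.0, 0.0]       # needed outputs (answers) of the next layer
inputs  = [0.2, 0.6, 0.9]  # previous outputs of the hidden layer

inputs.size.times do |inp_i|
  # answer - bias - weighted sum of every input except inp_i, per next-layer unit
  numerator = answers.zip(units).map do |ans, unit|
    ans - unit["b"][0] - unit["w"].each_with_index.map { |w, i| i == inp_i ? 0 : w * inputs[i] }.sum
  end.sum
  inputs[inp_i] = numerator / units.map { |unit| unit["w"][inp_i] }.sum
end

p inputs  # the needed outputs for the hidden layer
Note that each pass overwrites one entry of inputs, so later units are solved against the already-updated values, exactly as in the library code.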
Now, to find the best-fitting weights for the previous layer, use the same algorithm as for the output layer, but with the vector o in place of the vector a.
This is the end! Other error functions and activation functions are going to be implemented soon! If you have any questions about using this algorithm or its licensing, or if you want to contribute to this project, you can reach out to me through my portfolio or by email at kyryloshy@gmail.com. Thank you for your time!
OPE Prop formulas on this website are licensed under the CC BY-SA 4.0 License.