Author Topic: Artificial neural network and back propagation  (Read 6790 times)


Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #15 on: April 20, 2021, 02:44:20 am »
.
« Last Edit: May 17, 2021, 01:10:27 am by The Jazz Man »

Offline Dimster

  • Forum Resident
  • Posts: 500
Re: Artificial neural network and back propagation
« Reply #16 on: April 20, 2021, 03:13:12 pm »
Hello again Jazz - This could be a case of the blind leading the blind, but my layman's understanding of back propagation is that it uses the derivative of the variable you are working with and then back-tracks the results. So in my mind you start with a value and multiply it by a weight, which gives an output value. Say you start with 100 and multiply it by a weight of 12, resulting in 1200. The derivative is then a measure of how big or small the weight's impact was on the starting value (100:1200).

If you have the racing form and know the data on the horse, jockey, and track conditions, along with the outcome of the race, and you can factor all that data into an algorithm that arrives at the same outcome recorded on the race form, then the secret has to be in the weight values assigned to each factor. By knowing the derivative of each weight (how big or small an impact each weight had), you can mess with a bias value to adjust the result.

You have multiple hidden layers and an error value. Back propagation, as I understand it, requires adjusting the weights by their derivatives, starting from the last layer and working back to the first. In my imagination I see the winning horse in the final stretch: the weights applied to the horse, jockey, and track conditions could all amplify just in that final stretch.
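Something like this toy sketch is how I picture one weight getting nudged (Python just for illustration; all the names and numbers here are made up):

```python
# Toy sketch of one back-propagation step for a single weight.
x = 100.0        # starting value (the input)
w = 12.0         # the weight
target = 900.0   # the output we wish we had gotten

y = x * w              # forward pass: 100 * 12 = 1200
error = y - target     # how far off we are: 300

# For a squared error E = (y - target)^2, the derivative of E with
# respect to w is 2 * error * x: a measure of how much this weight
# influenced the miss.
gradient = 2.0 * error * x

learning_rate = 0.000001           # small step so we don't overshoot
w = w - learning_rate * gradient   # nudge the weight against the error
print(w)                           # 11.94, pulling y back toward 900
```

With more layers you would repeat the same nudge for each weight, last layer first.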

I'm not sure whether your hidden layers represent points in time, the pole the horses are at, or a build-up of the environment in which the race is run, but back propagation could also apply to the derivative of the layer itself.

I love your choice of subject for applying an artificial neural network and back propagation. I would be interested in your understanding of back propagation, and in whether I could be out to lunch (out of the money, so to speak).

Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #17 on: April 22, 2021, 03:42:59 am »
.
« Last Edit: May 17, 2021, 01:10:39 am by The Jazz Man »

Offline Dimster

  • Forum Resident
  • Posts: 500
Re: Artificial neural network and back propagation
« Reply #18 on: April 22, 2021, 09:59:12 am »
Hello again Jazz - I am the farthest person you can find from having math skills of any kind, but over the years I have come across those terms and have applied my own meanings to them.
- Delta - I substitute this word for "marginal change", the difference between two numbers. For example, if value 1 = 10 and value 2 = 3, the distance between them is a marginal change of -7, which I think calculus would call a delta of -7; it's just the change between two values, either a + or a -. This is also a way of expressing a derivative, but the -7 needs some context to determine whether it is a large change or a small one relative to the values you are hoping to see.
- Omega - I have nothing to offer other than I believe it was a Commodore computer, or it was in a song I once heard about the Alpha, Delta, and Omega (I think it was Leonard Cohen). But as it is the last letter of the Greek alphabet, perhaps I would look at it as the very last resulting outcome of a formula.
- Theta - I have come across this one in some of my stock buying and selling. It carries a negative connotation, meaning a decreasing value over time, like the decreasing interest income value of a bond over time.


A few years ago I decided I wanted to learn more about AGI (Artificial General Intelligence). I have built a number of perceptrons with hopes of one day bringing them all together in one massive sigmoid neuron. I am still in the learning-as-I-go phase. So, long story short, I would be very interested in bouncing ideas around; however, as I said previously, this could be a case of the blind leading the blind.
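For what it's worth, the perceptrons I've been building boil down to something like this little sketch (Python here just to show the shape; the numbers are arbitrary):

```python
import math

# A single sigmoid neuron: a weighted sum of the inputs, plus a bias,
# squashed into the range (0, 1).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

print(neuron([1.0, 0.5], [0.8, -0.3], 0.1))  # some value between 0 and 1
```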

Is the neural net you are working on an RNN (Recurrent Neural Network)? Meaning, does each layer of your network take the output of the one before it (like mixing all the ingredients for a cake in one bowl), or do your layers contribute separately to the final output (like frying the eggs first, then the bacon, then the toast, and then it all appears as one breakfast)?
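In code, the two wirings I am asking about might look like this (hypothetical Python, not a real RNN, just to make the cake/breakfast analogy concrete):

```python
# "One bowl": each layer eats the output of the layer before it.
def chained(x, layers):
    for layer in layers:
        x = layer(x)  # output of one feeds the next
    return x

# "Separate pans": every layer sees the raw input; results are
# combined only at the end.
def separate(x, layers):
    return sum(layer(x) for layer in layers)

double = lambda v: v * 2
add_one = lambda v: v + 1

print(chained(3, [double, add_one]))   # (3*2)+1 = 7
print(separate(3, [double, add_one]))  # (3*2)+(3+1) = 10
```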

If you are OK with collaborating offline, perhaps I can find a way of sending you my email address. As I have said, I am not that strong in AI (yet), and I know there are a lot of really good programmers here with some very clever game-theory AI (so much so that in some of their games I can't beat the computer player).

Offline bplus

  • Global Moderator
  • Forum Resident
  • Posts: 8053
  • b = b + ...
Re: Artificial neural network and back propagation
« Reply #19 on: April 22, 2021, 12:22:01 pm »
A derivative might be the ratio of two deltas, like dy/dx, the change of y over the change of x, where F(x) = y and d is short for delta, as dx goes to 0.

In fact add limit and I think that's it: The derivative of F(x) is Limit of dy/dx as x goes to 0.
(Let's see 2021 - 1972 = 49 years since calculus class.)

« Last Edit: April 22, 2021, 12:29:21 pm by bplus »

Offline Dimster

  • Forum Resident
  • Posts: 500
Re: Artificial neural network and back propagation
« Reply #20 on: April 22, 2021, 01:28:40 pm »
And you've still got it, bplus. Love the formula; I have one question on it: delta y / delta x works up to and before x = 0, but not at zero, whereas a negative delta does have a meaning in AI. To bridge between positive and negative I have simply been ignoring the division by zero and gone directly from delta x / 1 to delta x / -1. Is there a way in math to actually deal with a division by zero? Using decimal places you can approach zero, but I'm finding more meaning in the results by ending the positive x delta at 1 and starting the negative x delta at -1. The crazy thing about neural networks is that accuracy produces the best results, but it's always those flaws in the formulas that come up with some very interesting insights.

Offline bplus

  • Global Moderator
  • Forum Resident
  • Posts: 8053
  • b = b + ...
Re: Artificial neural network and back propagation
« Reply #21 on: April 22, 2021, 02:54:18 pm »
Oops! I think I meant as dx goes to 0, i.e., as the change in x (dx) becomes nothing, certainly not as x goes to 0. Sorry.

I think you might be able to follow this for F(x) = x^2
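Written out, the steps go something like this:

```latex
\frac{dy}{dx} = \lim_{dx \to 0} \frac{(x + dx)^2 - x^2}{dx}
              = \lim_{dx \to 0} \frac{2x\,dx + (dx)^2}{dx}
              = \lim_{dx \to 0} \left(2x + dx\right)
              = 2x
```

The dx in the numerator and denominator cancels before you ever let dx reach 0, which is how the division by zero never actually happens.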


Offline Dimster

  • Forum Resident
  • Posts: 500
Re: Artificial neural network and back propagation
« Reply #22 on: April 22, 2021, 04:48:23 pm »
Quite a bit to decipher in the steps taken from f(x) = x^2 to the resulting 2x. I think the challenge in porting calculus terms and formulas over to neural nets is in the jargon, in seeing the math in layman's language or basic language, especially when calculus is not the level of math you are familiar with.

Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #23 on: April 23, 2021, 12:30:50 am »
.
« Last Edit: May 17, 2021, 01:11:03 am by The Jazz Man »

Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #24 on: April 23, 2021, 06:51:56 am »
.
« Last Edit: May 17, 2021, 01:16:20 am by The Jazz Man »

Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #25 on: April 23, 2021, 09:02:12 am »
.
« Last Edit: May 17, 2021, 01:11:37 am by The Jazz Man »

Offline Dimster

  • Forum Resident
  • Posts: 500
Re: Artificial neural network and back propagation
« Reply #26 on: April 23, 2021, 10:02:20 am »
So Jazz Man - here is my understanding of back propagation in an RNN. Please forgive the simplicity of the layout:
The WEIGHTS:
  - Horse (Input 1, i1): 10 = Excellent, 8 = Good, 5 = Average, <5 = Also-ran
  - Jockey (Input 2, i2): 10 = Most wins, 8 = Top 10, 5 = Top 100 to Top 10, <5 = Newbie
  - Gate (Layer 1, L1): 10 = Ideal for the rail, 8 = Average for the rail, 5 = Farthest from the rail
  - Track (Layer 2, L2): 10 = Short, 8 = Medium, 5 = Long track
  - Track conditions (Layer 3, L3): 10 = Dry, 8 = High humidity, 5 = Muddy
  - Energy (Layer 4, L4): 10 = In the stretch, 8 = At the post, 5 = On the track

So the Perfect Race =    (((((i1+i2)*L1)*L2)*L3)*L4) = 200,000
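In throwaway Python the perfect race works out like so (same made-up scores):

```python
# Forward pass of the toy race score with every factor at its best (10).
i1, i2 = 10, 10                  # horse, jockey
L1, L2, L3, L4 = 10, 10, 10, 10  # gate, track, conditions, energy

score = ((((i1 + i2) * L1) * L2) * L3) * L4
print(score)  # 200000
```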

Back propagation would help to zero in on, let's say, the energy (by energy I mean both the horse's and the jockey's effort being expended, which of course could be separate layers). In the 200,000 calculation of the perfect race I have only used a value for the stretch-drive energy, but that doesn't account for the various outputs of energy at different distances on the track. In my understanding of back propagation, it is simply the subtraction of L4. You do get the same math result if you sum all except L4 and subtract that sum from the 200,000, but your AI will work better (or maybe more accurately) by reversing out the layers than by summing and subtracting from the original total.

Combined with your error value, and by examining the deviations within the layers, you can tinker with the weights until you come up with the right combination, one that predicts the past result of the horse in your racing forms, whether it won, placed, or showed.

A little while ago I had a discussion with forum members here about how they approached a bias in their AI algorithms: is a bias value generated by a math formula, or is a bias just a value you have an inclination to apply? For example, if you were to look at the horses and determine how many hands high each stands, would you assign a bias value to the taller horses? If so, would you use a specific measurement for the bias, or might you just look at a horse's record, decide it's likely a Seabiscuit (small but fast and full of heart), and put your own bias value on it?
                     

Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #27 on: April 23, 2021, 07:34:08 pm »
.
« Last Edit: May 17, 2021, 01:11:52 am by The Jazz Man »

Offline The Jazz Man

  • Newbie
  • Posts: 19
Re: Artificial neural network and back propagation
« Reply #28 on: April 24, 2021, 01:16:51 am »
.
« Last Edit: May 17, 2021, 01:12:04 am by The Jazz Man »

Offline Dimster

  • Forum Resident
  • Posts: 500
Re: Artificial neural network and back propagation
« Reply #29 on: April 24, 2021, 08:11:21 am »
Thanks, Jazz Man, for that clarification on the scope of your neural net. The code I was looking at seemed to fill 20 hidden layers with 100 input values, which implied static hidden layers to me. Each layer of an RNN does need to be dynamic in order for back propagation to work, and clearly from your recent post there is a lot more to your hidden-layer coding than I was aware of.

I do understand the difference between an input and a weight; I apologize for the oversimplification of my example.

I am curious: how do you back out the effects of applying the sigmoid function in your back-propagation routine?
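My own guess, from what little I've read, is that the usual trick is to multiply the error flowing backward by the sigmoid's derivative, which has the handy closed form s * (1 - s). A minimal Python sketch of just that piece:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# The sigmoid's derivative can be written in terms of its own output:
# sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)

print(sigmoid_derivative(0.0))  # 0.25, the sigmoid's maximum slope
```

But I may be out to lunch on that too.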