QB64.org Forum

Active Forums => QB64 Discussion => Topic started by: The Jazz Man on April 06, 2021, 06:29:16 pm

Title: Artificial neural network and back propagation
Post by: The Jazz Man on April 06, 2021, 06:29:16 pm
.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 06, 2021, 06:42:49 pm
Looks good; if you have trouble, try -1 * x as I had to with my own plot program:

compare: https://en.wikipedia.org/wiki/Sigmoid_function

to

  [ This attachment cannot be displayed inline in 'Print Page' view ]  
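Something along these lines is all I mean - a minimal sketch (the screen size and the scaling factors here are just picked for illustration):
Code: QB64: [Select]
Screen _NewImage(600, 400, 32)
Dim x As Single, sg As Single
For x = -8 To 8 Step .01
    sg = 1 / (1 + Exp(-1 * x)) 'the sigmoid; note the -1 * x in the exponent
    PSet ((x + 8) / 16 * 600, 400 - sg * 400) 'scale x into 0-600 and flip y so the curve rises
Next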


Welcome to forum @The Jazz Man
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 06, 2021, 07:03:38 pm
.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 06, 2021, 07:11:10 pm
I put your code up in the IDE; immediately it red-lines GOTO 453 on line 203.

Are all those arrays you are using 10 items or fewer? Or maybe you will need to DIM a few arrays?

Also, I don't see any values assigned to the variables used as top limits of your For loops.
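(For reference: an array used without DIM in QB64 is implicitly dimensioned 0 to 10, so anything indexed past 10 needs an explicit DIM. A quick illustration, with made-up array names:)
Code: QB64: [Select]
Dim A(1 To 100) As Single 'explicit DIM: valid indices 1 to 100
B(10) = 1 'implicit array: indices 0 to 10 are fine
'B(11) = 1 'would stop with "Subscript out of range"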
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 06, 2021, 07:22:06 pm
OH hey, are you translating a Morristown NJ program?

Programs written before there was an ELSE for IF... THEN...

They had to use GOTO every time you turned a corner! LOL

AND ALWAYS ALL CAPITALS :)
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 06, 2021, 07:39:08 pm
.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 06, 2021, 07:43:25 pm
.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 06, 2021, 08:31:19 pm
Well good luck here is my little foray into Neural Nets, pattern recognition:
Code: QB64: [Select]
_Title "NN 1" 'by B+ started 2018-09-08 my first attempt with training a Neural Net

'2018-09-09 retest this updating the Bias Weight with the others, though I suspect it will stir up a storm?

Const WW = 800
Const WH = 620
Screen _NewImage(WW, WH, 32)
_ScreenMove 300, 60

'need a place to store inputs x, y, B for bias
'well just use loops
'dim and init weights
Const LR = .2 'Learning Rate don't overcorrect errors
Dim Shared BiasWeight As Single, Bias As Integer, ArrayStart As Integer, ArrayEnd As Integer, SQ As Integer, Mode As Integer
Dim i As Integer, rx As Integer, ry As Integer, colr As Long, cnt As Integer, x As Integer, y As Integer, correct As Single
Mode = 1
ArrayStart = 1 'Start range of x, y
ArrayEnd = 60 'End range of x, y
SQ = 10 'for graphic squares drawing to watch Perceptron Learn Y > X

'setup weights
Dim Shared WX(ArrayStart To ArrayEnd) As Single, WY(ArrayStart To ArrayEnd) As Single
Bias = 1
BiasWeight = Rnd * 2 - 1
For i = ArrayStart To ArrayEnd 'init weights to random values in (-1, 1)
    WX(i) = Rnd * 2 - 1
    WY(i) = Rnd * 2 - 1
Next

'61 x 61 squares of side 10 pixels fit in 610 x 610 pixel area
'train 1000 random points per frame 10 frames per second and show progress of learning hopefully
While _KeyDown(27) = 0
    Cls
    If Mode = 1 Then _Title "Training Pattern #1: Y >= X" Else _Title "Training Pattern #2: Framed!"
    cnt = 0
    For i = 1 To 1000 'train 10% of points at a time
        rx = Int(Rnd * ArrayEnd) + 1: ry = Int(Rnd * ArrayEnd) + 1 '60 x 60 field so can show circles of radius 10
        Train rx, ry
    Next
    For y = ArrayStart To ArrayEnd
        For x = ArrayStart To ArrayEnd
            If Mode = 1 Then
                If TrainThis%(x, y) = -1 Then Line (x * SQ, y * SQ)-Step(SQ, SQ), _RGB32(0, 0, 255), B
                If TrainThis%(x, y) = 1 Then Line (x * SQ, y * SQ)-Step(SQ, SQ), _RGB32(255, 255, 255), B
                If TrainThis%(x, y) - Perceptron%(x, y) Then 'wrong! Perceptron
                    colr = _RGB32(255, 0, 0)
                Else 'good job Perceptron!
                    colr = _RGB32(0, 200, 0): cnt = cnt + 1
                End If
            Else
                If TrainThat%(x, y) = -1 Then Line (x * SQ, y * SQ)-Step(SQ, SQ), _RGB32(0, 0, 255), B
                If TrainThat%(x, y) = 1 Then Line (x * SQ, y * SQ)-Step(SQ, SQ), _RGB32(255, 255, 255), B
                If TrainThat%(x, y) - Perceptron%(x, y) Then 'wrong! Perceptron
                    colr = _RGB32(255, 0, 0)
                Else 'good job Perceptron!
                    colr = _RGB32(0, 200, 0): cnt = cnt + 1
                End If
            End If
            Line (x * SQ + 2, y * SQ + 2)-Step(SQ - 3, SQ - 3), colr, BF
        Next
    Next
    correct = Int(cnt * 10000 / (61 * 61)) / 100
    _PrintString (650, 10), "Correct:" + Str$(correct) + "%"
    If Mode = 1 Then
        _PrintString (640, 30), "White Frame Y >= X"
        _PrintString (640, 50), " Blue Frame Y <  X"
    Else
        _PrintString (620, 30), "White Frame Train +1"
        _PrintString (620, 50), " Blue Frame Train -1"
        _PrintString (640, 180), "Wait for it... ;)"
    End If
    _PrintString (640, 70), "Green Fill Correct"
    _PrintString (640, 90), "Red Fill Incorrect"
    _PrintString (620, 130), "Training 1000 random"
    _PrintString (620, 146), "points in 10th second."
    If correct > 95.51 And Mode = 1 Then
        Mode = 2
        mBox "Hmm... there seems to be a real battle at the border. OK 95.5% has been exceeded, let's see how fast this retrains to a new pattern...", "OK Now Test New Training Set!"
    ElseIf Mode = 2 And correct > 96.5 Then
        mBox "Over a 96.5% chance you're pregnant!", "OMG!"
        System
    End If
    _Display
    _Limit 10
Wend

Function Perceptron% (x As Integer, y As Integer)
    Dim sum As Single
    sum = x * WX(x) + y * WY(y) + Bias * BiasWeight 'sum the inputs times weights
    Perceptron% = Sign%(sum) 'apply the activation function to the sum for output
End Function

Function Sign% (n As Single) 'very simple activation function
    If n < 0 Then Sign% = -1 Else Sign% = 1
End Function

'this sub trains one randomly chosen Perceptron weight set
Sub Train (rx As Integer, ry As Integer)
    'adjust Perceptron's weights until we get good results
    '1. provide Perceptron with Inputs for which the correct answer is known
    '2. have Perceptron guess the answer
    '3. compute the error
    '4. adjust weights according to error
    '5. repeat until shaped up

    'so what are we going to train, oh there it is
    Dim Guess As Integer, Correct As Integer, Errror As Single 'Errror has 3 r's because Error is a reserved word
    Guess = Perceptron%(rx, ry)
    If Mode = 1 Then
        Correct = TrainThis%(rx, ry)
    Else
        Correct = TrainThat%(rx, ry)
    End If
    Errror = Correct - Guess 'either 0 when Guess = Correct, -2 or 2 when not
    If Errror Then
        WX(rx) = WX(rx) + rx * Errror * LR
        WY(ry) = WY(ry) + ry * Errror * LR
        BiasWeight = BiasWeight + 1 * Errror * LR
    End If
End Sub

Function TrainThis% (x As Integer, y As Integer)
    If y >= x Then TrainThis% = 1 Else TrainThis% = -1 'the x = y line is the border between true and false
End Function

Function TrainThat% (x As Integer, y As Integer) 'draw a frame plus a cross in the middle
    If x = ArrayStart Or x = ArrayStart + 1 Or x = ArrayEnd - 1 Or x = ArrayEnd Then
        TrainThat% = 1
    ElseIf y = ArrayStart Or y = ArrayStart + 1 Or y = ArrayEnd - 1 Or y = ArrayEnd Then
        TrainThat% = 1
    ElseIf x >= 29 And x <= 32 And y >= 12 And y <= 49 Then
        TrainThat% = 1
    ElseIf y >= 29 And y <= 32 And x >= 12 And x <= 49 Then
        TrainThat% = 1
    Else
        TrainThat% = -1
    End If
End Function

'title$ limit is 57 chars, all lines are 58 chars max
' version bak 2018-09-07_10P
Sub mBox (m$, title$)

    'first, screen dimension items to restore at exit
    Dim curScrn As Long, backScrn As Long, mbx As Long 'some handles
    Dim sw As Integer, sh As Integer 'screen width and height
    Dim fg As _Unsigned Long, bg As _Unsigned Long 'colors to restore at exit
    Dim ti As Integer, limit As Integer 'ti = text index for t$(), limit is number of chars per line
    Dim i As Integer, j As Integer, ff As _Bit, add As _Byte 'index, flag and chars-to-add count
    Dim bxH As Integer, bxW As Integer 'first as cells then as pixels
    Dim mb As Integer, mx As Integer, my As Integer, mi As Integer, grabx As Integer, graby As Integer
    Dim tlx As Integer, tly As Integer 'top left corner of message box
    Dim lastx As Integer, lasty As Integer, r As Integer
    Dim b$, c$, tail$, d$
    sw = _Width
    sh = _Height
    fg = _DefaultColor
    bg = _BackgroundColor
    'screen snapshot
    curScrn = _Dest
    backScrn = _NewImage(sw, sh, 32)
    _PutImage , curScrn, backScrn

    'setup t$() to store strings with ti as index, limit 58 chars per line max, b$ is for build
    ReDim t$(0): ti = 0: limit = 58: b$ = ""
    For i = 1 To Len(m$)
        c$ = Mid$(m$, i, 1)
        'are there any new line signals, CR, LF or both? take CRLF or LFCR as one break but dbl LF or CR means blank line
        Select Case c$
            Case Chr$(13) 'load line
                If Mid$(m$, i + 1, 1) = Chr$(10) Then i = i + 1
                t$(ti) = b$: b$ = "": ti = ti + 1: ReDim _Preserve t$(ti)
            Case Chr$(10)
                If Mid$(m$, i + 1, 1) = Chr$(13) Then i = i + 1
                t$(ti) = b$: b$ = "": ti = ti + 1: ReDim _Preserve t$(ti)
            Case Else
                If c$ = Chr$(9) Then c$ = Space$(4): add = 4 Else add = 1
                If Len(b$) + add > limit Then
                    tail$ = "": ff = 0
                    For j = Len(b$) To 1 Step -1 'backup until find a space, save the tail end for next line
                        d$ = Mid$(b$, j, 1)
                        If d$ = " " Then
                            t$(ti) = Mid$(b$, 1, j - 1): b$ = tail$ + c$: ti = ti + 1: ReDim _Preserve t$(ti)
                            ff = 1 'found space flag
                            Exit For
                        Else
                            tail$ = d$ + tail$ 'the tail grows!
                        End If
                    Next
                    If ff = 0 Then 'no break? OK
                        t$(ti) = b$: b$ = c$: ti = ti + 1: ReDim _Preserve t$(ti)
                    End If
                Else
                    b$ = b$ + c$ 'just keep building the line
                End If
        End Select
    Next
    t$(ti) = b$
    bxH = ti + 3: bxW = limit + 2

    'draw message box
    mbx = _NewImage(60 * 8, (bxH + 1) * 16, 32)
    _Dest mbx
    Color _RGB32(60, 40, 25), _RGB32(225, 225, 255)
    Locate 1, 1: Print Left$(Space$((bxW - Len(title$) - 3) / 2) + title$ + Space$(bxW), bxW)
    Color _RGB32(225, 225, 255), _RGB32(200, 0, 0)
    Locate 1, bxW - 2: Print " X "
    Color _RGB32(60, 40, 25), _RGB32(255, 160, 90)
    Locate 2, 1: Print Space$(bxW);
    For r = 0 To ti
        Locate 1 + r + 2, 1: Print Left$(" " + t$(r) + Space$(bxW), bxW);
    Next
    Locate 1 + bxH, 1: Print Space$(limit + 2);

    'now for the action
    _Dest curScrn

    'convert to pixels the top left corner of box at moment
    bxW = bxW * 8: bxH = bxH * 16
    tlx = (sw - bxW) / 2: tly = (sh - bxH) / 2
    lastx = tlx: lasty = tly
    'now allow user to move it around or just read it
    While _KeyDown(27) = 0 And _KeyDown(13) = 0 And _KeyDown(32) = 0
        Cls
        _PutImage , backScrn
        _PutImage (tlx, tly), mbx, curScrn
        _Display
        While _MouseInput: Wend
        mx = _MouseX: my = _MouseY: mb = _MouseButton(1)
        If mb Then
            If mx >= tlx And mx <= tlx + bxW And my >= tly And my <= tly + 16 Then 'mouse down on title bar
                If mx >= tlx + bxW - 24 Then Exit While 'clicked the X
                grabx = mx - tlx: graby = my - tly
                Do While mb 'wait for release
                    mi = _MouseInput: mb = _MouseButton(1)
                    mx = _MouseX: my = _MouseY
                    If mx - grabx >= 0 And mx - grabx <= sw - bxW And my - graby >= 0 And my - graby <= sh - bxH Then
                        'attempt to speed up with less updates
                        If ((lastx - (mx - grabx)) ^ 2 + (lasty - (my - graby)) ^ 2) ^ .5 > 10 Then
                            tlx = mx - grabx: tly = my - graby
                            Cls
                            _PutImage , backScrn
                            _PutImage (tlx, tly), mbx, curScrn
                            lastx = tlx: lasty = tly
                            _Display
                        End If
                    End If
                    _Limit 400
                Loop
            End If
        End If
        _Limit 400
    Wend
    'put things back
    Color _RGB32(255, 255, 255), _RGB32(0, 0, 0): Cls
    _PutImage , backScrn
    _Display
    Color fg, bg
    _FreeImage backScrn
    _FreeImage mbx
    _KeyClear
End Sub
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 06, 2021, 09:29:18 pm
.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 07, 2021, 12:47:45 am
Quote
@bplus
cheers for the example Bplus.
it appears that you are a much more advanced programmer than me; it will take me a while to wrap my brain around this.
thank you for trying to help.
cheers

Yes, there is a fine mBox (message box) subroutine in there, but the results of my experiment with neural nets were dismal (the code originally attempted digit recognition). I hated to see all that code go to waste, so I tried for a funny. :)
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 07, 2021, 10:50:58 am
Hello Jazz Man

May I ask a few questions on the overall flow of your program? No problem if you'd prefer not to go into any of this.

- I gather the user of your game not only picks which horse will win; is there also a $$$$ bet which determines the ultimate value of money won?
- It would appear the program is a horse race of 3 horses(?). Is the winner the one which won the race, or the one with the most value or prize money won by the person who bet?
- If the horse winning and the value of the win are the same thing, then are the "Back Propagation" and Learning Rate directed at improving the results of the two losing horses and higher winnings?
- Does each horse start the race with an equal weighting, and then in hidden layer 2 a random weight is placed on each of them?
- Is Hidden Layer 3, the Error Layer, determining if the horse picked to win did in fact win, or is it monitoring a pattern in the random values?

I apologize in advance for these naive questions. I am interested in AI and have absolutely zero background in either programming or calculus. I've been to a horse race track only once in my life and I did bet and, beginner's luck, I bet on the right horse to win. But there was one guy in our group who made more money on the horse that came in third than I did for the win.

I have tried to follow your code for the calculation and application of Back Propagation and Learning Rate. Both of these, I believe, are triggered by the Error, but I'm not sure of that or of what actually constitutes an error.

I'll be following this thread - who knows, you may find a real life application.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 17, 2021, 06:42:19 am
.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 17, 2021, 04:54:10 pm
Quote
and I hope that somebody will help me out with my plain math calculus request.

@The Jazz Man
What?
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 19, 2021, 09:47:56 am
.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 19, 2021, 12:02:20 pm
Well, all I can say for sure is that the formula is correct as per my first reply, and my attempts with neural nets have not been satisfactory either.

You keep referring to math or calculus, but it's really a matter of getting the algorithm for training the neural net correct; more a computer science problem than a math one, though logic is the foundation of both.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 20, 2021, 02:44:20 am
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 20, 2021, 03:13:12 pm
Hello again Jazz - This could be a case of the blind leading the blind, but my layman's understanding of Back Propagation is that it uses the derivative of the variable you are using and then backtracks the results. So in my mind you have started with a Value, then multiplied that value by a Weight, which gives an outputted value. Say you started with 100 and multiplied it by a weight of 12, resulting in 1200. So the derivative is a measure of how big or little the impact of the Weight was on the starting value: 100:1200.

If you have the racing form and know the data of the horse, jockey, track conditions and the outcome of the race - and you factor all that data into an algorithm which arrives at the same outcome of the race as recorded on the race form - then the secret has to be in the weight values assigned to each of the factors. By knowing the derivative of each weight (knowing how big or small an impact each weight has), you can mess with a bias value to adjust the derivation.

You have multiple hidden layers and an error code. Back Propagation, as I understand it, requires the adjustment of the derivatives starting from the last layer and working back to the beginning weights. In my imagination, I see the winning horse in the final stretch; the weights applied to the horse, jockey and track conditions could amplify just in this final stretch.

I'm not sure if your hidden layers are by points of time, or by which pole the horses are at, or if the hidden layers are a build of the environment in which the race is run; back propagation could also apply to the derivation of the layer itself.

I love your choice of subject to apply an Artificial Neural Network and Back Propagation to. I would be interested in your understanding of back propagation, and in whether I could be out to lunch (out of the money, so to speak).
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 22, 2021, 03:42:59 am
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 22, 2021, 09:59:12 am
Hello again Jazz - I am about the farthest person you can find from having math skills of any kind, but over the years I have come across those terms and have applied my own meanings to them.
- Delta - I substitute this word for Marginal Change - which is the difference between 2 numbers. For example: value 1 = 10, value 2 = 3; the distance between them is a Marginal Change of -7, which I think in calculus would be referred to as Delta -7, but it's just the change between two values, either a + or a -. This is also a way of expressing a derivative, but the -7 needs some context to determine whether -7 is a large change in terms of the values you are hoping to see, or a small one.
- Omega - I have nothing to offer other than I believe it was a Commodore computer, or in a song I once heard about the Alpha, Delta and Omega (I think it was Leonard Cohen). But as it is the last letter in the Greek alphabet, perhaps I would look at it as the very last resultant outcome of a formula.
- Theta - I have come across this one in some of my stock buying and selling. It carries a negative connotation, meaning a decreasing value over time, like the decreasing interest income value of a bond over time.


A few years ago I decided I wanted to learn more about AGI (Artificial General Intelligence). I have built a number of perceptrons with hopes of one day bringing them all together in one massive Sigmoid Neuron. I am still in the phase of learning as I go. So long story short, I would be very interested in bouncing ideas around; however, as I said previously, this could be a case of the blind leading the blind.

Is the neural net you are working on an RNN (Recurrent Neural Network)? Meaning, does each layer of your network take the output of the one before it (like mixing all the ingredients to make a cake in one bowl), or are your layers separately contributing to the final output (like frying the eggs first, then the bacon, then the toast, and then it all appears as one breakfast)?

If you are OK with collaborating offline, perhaps I can find a way of sending you my email address. As I have said, I am not that strong in AI (yet), and I know there are a lot of really good programmers here with some very clever Game Theory AI (so much so that in some of their games I can't beat the computer player).
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 22, 2021, 12:22:01 pm
Derivative might be the ratio of two deltas, like dy/dx, the change of y over the change of x, with F(x) = y and d being short for delta, as dx goes to 0.

In fact, add a limit and I think that's it: the derivative of F(x) is the limit of dy/dx as x goes to 0.
(Let's see 2021 - 1972 = 49 years since calculus class.)

Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 22, 2021, 01:28:40 pm
And you still got it, bplus. Love the formula; I have one question on it -- delta y / delta x -- works up to and before x = 0 but not at zero. Whereas a negative delta does have a meaning in AI. To bridge between a positive and a negative I have simply been ignoring division by zero and gone directly from delta x / 1 to delta x / -1. Is there a way in math to actually deal with a division by zero? Using decimal places you can approach zero, but I'm finding more meaning in the results by ending the positive x delta at 1 and starting the negative x delta at -1. The crazy thing about neural networks is that accuracy produces the best results, but it's always those flaws in the formulas which come up with some very interesting insights.
Title: Re: Artificial neural network and back propagation
Post by: bplus on April 22, 2021, 02:54:18 pm
Oops! I think I meant as dx goes to 0, as the change in x = dx becomes nothing; certainly not as x goes to 0, sorry.

I think you might be able to follow this for F(x) = x^2
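In outline, the steps run like this (writing dx for the small change in x):

$$\frac{dy}{dx} = \frac{(x+dx)^2 - x^2}{dx} = \frac{2x\,dx + (dx)^2}{dx} = 2x + dx \longrightarrow 2x \quad \text{as } dx \to 0$$

Notice dx cancels out of the fraction before we let it go to 0, so there is never an actual division by zero.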

Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 22, 2021, 04:48:23 pm
Quite a bit to decipher in the steps taken from f(x) = x^2 to get to the resultant 2x. I think the challenge in porting calculus terms and formulas over to neural nets is in the jargon, or in seeing the math in layman's language or basic language. Especially when calculus is not the level of math you are familiar with.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 23, 2021, 12:30:50 am
.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 23, 2021, 06:51:56 am
.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 23, 2021, 09:02:12 am
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 23, 2021, 10:02:20 am
So Jazz Man - here is my understanding of Back Propagation in an RNN. Please forgive the simplicity of the layout:
The WEIGHTS
- Horse: 10 = Excellent, 8 = Good, 5 = Average, <5 = Also ran .......... Input 1 (i1)
- Jockey: 10 = Most Wins, 8 = Top 10, 5 = Top 100 - Top 10, <5 = Newbie .......... Input 2 (i2)
- Gate: 10 = Ideal for the rail, 8 = Average for the rail, 5 = Farthest from the rail .......... Layer 1 (L1)
- Track: 10 = Short, 8 = Medium, 5 = Long Track .......... Layer 2 (L2)
- Track Conditions: 10 = Dry, 8 = High Humidity, 5 = Muddy .......... Layer 3 (L3)
- Energy: 10 = in the stretch, 8 = at the post, 5 = on the track .......... Layer 4 (L4)

So the Perfect Race = (((((i1+i2)*L1)*L2)*L3)*L4) = 200,000

Back Propagation would help to zero in on, let's say, the energy (in this case, by energy, I mean both the horse's and jockey's effort being expended, which of course can be separate layers). In the 200,000 calculation of the perfect race I have only used a value for the stretch-drive energy, but actually that doesn't account for the various outputs of energy at different distances on the track. In my understanding of back propagation, it is simply the subtraction of L4. You do get the same math result if you sum all except L4 and subtract that sum from the 200,000, but your AI will work better (or maybe more accurately) by reversing out the layers than by summing and subtracting from the original total.

Combined with your error code and examining the deviations within the layers, you can tinker with the weights until you come up with the right combination that predicts the past result of the horse in your racing forms, whether it won, placed or showed.

A little while ago I had a discussion with forum members here about how they approach a BIAS in terms of their AI algorithms - is a Bias value generated by a math formula, or is a Bias just a value you have an inclination to apply? For example, if you were to look at the horses and determine how many hands high they are, would you consider a bias value for the taller horses? If so, would you use a specific measurement for the bias, or might you just look at their record, decide it's likely a Seabiscuit (small but fast and full of heart), and just put your own bias value to it?
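A quick sketch of that arithmetic in QB64 (the names and values are just the ones from my layout above; since the layers here multiply, dividing by L4 is one way to reverse the last layer out):
Code: QB64: [Select]
Dim i1, i2, L1, L2, L3, L4, score
i1 = 10: i2 = 10 'horse and jockey inputs at their best
L1 = 10: L2 = 10: L3 = 10: L4 = 10 'gate, track, conditions, energy
score = ((((i1 + i2) * L1) * L2) * L3) * L4
Print score 'the Perfect Race: 200000
Print score / L4 'the energy layer reversed back out: 20000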
                     
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 23, 2021, 07:34:08 pm
.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 24, 2021, 01:16:51 am
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 24, 2021, 08:11:21 am
Thanks Jazz Man for that clarification on the scope of your neural net. The code I was looking at seemed to fill 20 hidden layers with 100 inputted values, which implied to me static hidden layers. Each layer of an RNN does need to be dynamic in order for back propagation to work, and clearly from your recent post there is a lot more to your hidden layer coding than I was aware of.

I do understand the difference between an input and a weight; I apologize for the oversimplification of my example.

I am curious: how do you back out the effects of the application of the Sigmoid function in your back propagation routine?
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 27, 2021, 11:36:29 am
Thanks for raising this topic - working with neural nets in layman's language is hard when calculus isn't a strong suit in the arsenal of knowledge. I have spent more time reading and sketching out a neural net than writing up code, but that being said, I have over 25000 lines of code in my present AI program. I have been working on Back Propagation, and in doing so I found out my layers (or Grids, as you have them in your program) needed to retain a lot more than just a value. To back propagate I need the layers to remember every element which was used to compose the value. Likely overkill. Then I was also looking at the layers as sequentially triggered, like the RNN, but once again I found that in Back Propagation I need to simply turn off a layer, or somehow skip right over it, if my error code is strong enough to find only the problematic layer(s).

But where I have struggled with Back Propagation is that damn Sigmoid Function. Or more to the point, trying to back it out of my backward passes of the network. At the moment my plan is to back propagate from just before the application of the Sigmoid Function (by the way, I'm using Sigmoid Function to mean the Activation Function). That way I don't need to deal with it. My thinking is: if the Sigmoid is the ultimate Activation (i.e. triggers error, triggers learning, triggers the final answer), then why not just avoid it altogether rather than trying to reverse engineer its effects on the outputted data which feeds it.

If I'm reading your program correctly, HGRID2(x, y) = 1 / (1 + EXP(-HGRID2(x, y))) is the Sigmoid, and it is being backed out by HGRID2E(y, x) = HGRID2E(y, x) * (1 - HGRID2E(y, x)) being the inverse. I was looking at a mighty complex formula for the Logit Function, which I thought was the way to back out a Sigmoid Function. Another calculus function to "layman it". As soon as I read that the Logit Function was the inverse of the Sigmoid and scanned its formula, I grabbed a few beers and watched some baseball. Is the inverse you are using in fact this Logit Function I have been reading about?

I know the Sigmoid is just a method to smooth out results by incorporating every integral of a curve, but it is my Achilles Heel to imagine the math behind the formula.
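For what it's worth, here is a minimal sketch of the three formulas in play, using my own variable names rather than brain.bas's: the sigmoid, its derivative written in terms of its own output (the y * (1 - y) form, which is what back propagation uses), and the logit, which is the actual inverse:
Code: QB64: [Select]
Dim x As Single, y As Single
x = .5
y = 1 / (1 + Exp(-x)) 'sigmoid: squashes any x into (0, 1)
Print y 'about .6225
Print y * (1 - y) 'the sigmoid's derivative at x, about .235 - used in backprop, not an inverse
Print Log(y / (1 - y)) 'logit: the true inverse, recovers x = .5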
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 29, 2021, 07:07:09 pm
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on April 30, 2021, 10:37:33 am
Jazz Man, first let me apologize in advance for long gaps in any future replies. I am involved in a project which will take me out of town for various periods of time. Your project here is a great one to work on, as the inputted data is solid, unlike my AI, which inputs data whose accuracy needs to be constantly checked. Also, it sometimes takes me a while to research the calculus formulas, conceptualize their meaning, and then laymanize the formulas into QB64 jargon. I find I often need to tinker with my formulas.

I have been trying to run your program but can't get it to go yet. It will not locate and load the supporting files. That seems to be down to the way I have written the paths to those files. I will sort that out.

Is the relationship between your 2 grids TIME? For example, is Grid 1 producing the relevant material data before the race is run, and Grid 2 projecting the outcome at the end of the race? Which I am then imagining means that the Sigmoid in Grid 1 is refining the weights for input into Grid 2? Actually, I "think" it's more complex than that - it would be more like Neuron (X1, Y1) in Grid 1 producing relevant and material data for every Neuron (X1...Xn, Y1...Yn) in Grid 2. And is it that every one of those outputs to Grid 2 is evaluated for activation by the Sigmoid function, or will the Sigmoid function determine that Neuron (X1, Y1) in Grid 1 has a weight of zero and therefore not pass anything?

In my approach to back propagation, the backward adjustment of a neuron (or the neuron's weight) is made to those neurons which have the largest impact (similar to the Delta Rule). So Neuron (X1, Y1) in Grid 1 offering zero would not be adjusted. I do recognize that Learning Rates and Error codes need to be contended with in back propagation, but is this basically the approach you are taking?
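(The Delta Rule update I have in mind, with $\eta$ the learning rate, $t$ the target, $y$ the output, and $x_i$ the input feeding weight $w_i$, is

$$\Delta w_i = \eta\,(t - y)\,x_i$$

so weights fed by the larger inputs get the larger corrections, and a zero input leaves its weight untouched.)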

Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on April 30, 2021, 09:27:17 pm
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on May 01, 2021, 11:00:10 am
I'm hung up on line 125: Open "C:\HORSE\TESTDATA\2.TXT" For Input As #21.

I can't seem to find this text file. Are you able to send it to me again? Or could this be a matter of my not looking hard enough, and it is within one of the 3 text files you have already provided?

Title: Re: Artificial neural network and back propagation
Post by: SMcNeill on May 01, 2021, 02:04:37 pm
Do you have a directory "C:\HORSE\..."??

Seems to me that you might need to change that path to point to a folder on your own drive.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on May 01, 2021, 04:10:54 pm
Hi Steve - ya, the brain.bas and the TXT files are on my D drive, and I have altered the paths in Jazz's code to load from my D drive. They all seem to load OK, but when I run the program it can't seem to find the file on line 125. I have been messing with backslashes and forward slashes and the spaces within the names in the paths, with no luck. The only thing that is coming to me is that I'm missing the test data somehow, and that could mean I need to search line by line in the files I have to find out if I have done something to the data itself, like failing to download a complete file and getting only part of it.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on May 02, 2021, 11:30:06 pm
.
Title: Re: Artificial neural network and back propagation
Post by: Dimster on May 04, 2021, 04:42:10 pm
Thanks Jazz - I'm up and running. Just to clarify some of the terminology:
- TRAINING COUNT: does this just keep count of the number of times the training data has gone through the network, or will it count until Predicted = Actual?
- ERROR LEVEL: so is this a measure of the level of impact the total errors had on the input? For example, 10 input errors contributed to the output being off by 60% of the expected output for that Grid?
- TOTAL NETWORK ERRORS FOR THE CYCLE: this would be an integer counting errors by (??? each neuron or each grid)? And is a Cycle = 1 race run, which is probably the same as 1 prediction?
- ACTUAL POSITION: this is the target value being solved for - meaning the output after the network training (the Predicted value) is expected to equal the Actual Position. Is this value also used in the Error and Learning/Training (i.e. if Predicted Position does NOT equal Actual Position then ...)?

There is also the term "Bias Weights". In a discussion with the gang on this forum about Bias and Weights, it was felt that these were interchangeable terms. That does seem to work in many AI algorithms, but I'm not sure how you are meaning it here in brain.bas (i.e. a Bias applied to the Weight, or Bias and/or Weight).

What a fascinating program you have here. I've filled my whiteboard about a dozen times now trying to follow the flow of the data to see where the errors are being caught and which weights are the culprits.

I did want to ask you about the beginning values of each acceptor. In the AI program I'm writing, I have 50 Events that I'm tracking on a weekly basis. Events are numbered 1 to 50. The beginning values I use make each Event equal (1/50 = .02). Then to this beginning value I apply a weight, so the value going into the neuron is .02 * weight. Are your acceptors starting with an equal value (meaning that if they are equal at the start of the race, any acceptor could be in the money), or is it a calculated value (meaning there are a number of factors which form the beginning value before the weight is applied, because we already know if the horse was in the money or not)?


Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on May 06, 2021, 10:24:19 pm
.
Title: Re: Artificial neural network and back propagation
Post by: The Jazz Man on May 07, 2021, 12:21:01 am
.