I just wanted to add a technical note, which is beyond what you were asking, but still interesting.
Original CRT TVs had extremely fine-grained brightness available. The intensity of the electron beam hitting the phosphor determines how bright each spot glows, and because the signal was analog there was no fixed set of brightness steps.
Once computer CRTs came out, they eventually adopted an RGB encoding with 256 shades each of red, green and blue to mix together. So where a CRT TV effectively had thousands of brightness levels, a computer monitor had only 256 per channel. The goal was making computer monitors less blurry and more precise. But squeezing thousands of brightness levels into only 256 steps means that if you spread them on a linear scale, 256 equal-sized steps, you end up with almost no usable dark shades and far too many bright ones. To deal with this perceptual brightness issue, the RGB encoding is non-linear: it spends more of its values on the darker range (humans can see small shifts in dark colours easily) and fewer on the bright end.
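To put rough numbers on that, here's a little Python sketch (not QB64; the 1% cutoff is just an arbitrary example threshold) counting how many of the 256 codes land in the darkest 1% of physical brightness under a linear encoding versus a gamma-2.2 one:

    # How many 8-bit codes fall in the darkest 1% of physical brightness?
    linear_codes = sum(1 for v in range(256) if v / 255 <= 0.01)            # linear encoding
    gamma_codes  = sum(1 for v in range(256) if (v / 255) ** 2.2 <= 0.01)   # gamma-2.2 encoding
    print(linear_codes)  # 3  -> a linear scale leaves almost nothing for dark shades
    print(gamma_codes)   # 32 -> the non-linear encoding spends far more codes down there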
The problem with a non-linear RGB is that if you draw a ramp from 0 to 255, one pixel row at a time, you get a gradient whose actual light output doesn't transition evenly from dark to medium to light. One hack to get around this is a "gamma 2.2" adjustment, which almost matches the real brightness scale. Done that way, the gradient uses fewer of the dark values and more mids and bright tones across its length.
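As a rough illustration (Python rather than QB64, and assuming an idealised display gamma of exactly 2.2), here's how you'd compute the pixel values for a ramp whose actual light output climbs evenly:

    # 256-pixel ramp that is linear in physical light output.
    # Store the gamma-encoded value; the display's ~2.2 gamma undoes it.
    width = 256
    ramp = []
    for x in range(width):
        linear = x / (width - 1)                     # intended physical brightness, 0..1
        encoded = round(255 * linear ** (1 / 2.2))   # value to actually write to the pixel
        ramp.append(encoded)
    print(ramp[:5], ramp[-5:])   # dark values get pushed up, as described above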
Most graphics tools, including Photoshop, don't handle this conversion to linear brightness correctly, so most apps produce poor-quality gradients. The same issue shows up when you scale an image down: the app averages pixel values together to build the smaller image, but because RGB values aren't linear in light, the shrunk image comes out darker than it should.
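A tiny worked example of that darkening effect (Python again, assuming the display response is a plain 2.2 power curve rather than the exact sRGB formula): average a pure black and a pure white pixel.

    # Naive averaging of encoded values vs averaging in linear light.
    def to_linear(v):                 # decode an 8-bit value to physical brightness
        return (v / 255) ** 2.2

    def to_encoded(lin):              # re-encode physical brightness to 8-bit
        return round(255 * lin ** (1 / 2.2))

    black, white = 0, 255
    naive   = (black + white) // 2                                    # 127 -> looks too dark
    correct = to_encoded((to_linear(black) + to_linear(white)) / 2)   # 186 -> matches real light
    print(naive, correct)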
I'm not offering code to produce a linear gradient on-screen in QB64 -- that's beyond my experience level -- but I can tell you that the gradients being described here do not match an actual linear brightness scale. There are other quirks of RGB too, such as green appearing much brighter to humans than red: the standard Rec. 709 luminance weights put green at roughly 0.72 versus 0.21 for red.
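For reference, that weighting is the standard Rec. 709 / sRGB relative-luminance formula (again just a Python sketch, and it expects linear-light values, not raw 8-bit ones):

    # Relative luminance from linear-light RGB, Rec. 709 coefficients.
    def luminance(r, g, b):
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    print(luminance(1, 0, 0))   # pure red   -> 0.2126
    print(luminance(0, 1, 0))   # pure green -> 0.7152, over three times brighter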
I can't wait until all computer monitors and video cards support a 10-bit or higher linear RGB colourspace (instead of 8-bit non-linear). 10-bit offers 1024 linear brightness steps per RGB channel, instead of 256 non-linear steps. The problem is that 8-bit RGB became so prevalent on PCs that nearly every app out there uses it, and converting all those apps to a colourspace that works quite differently is a lot of work.
With gradients in 8-bit RGB, the best you can do is compute the gradient in a linear colourspace and then downsample back to 8-bit, picking the closest available brightness value and making full use of dithering to perceptually fill in the gaps between the RGB brightness steps. Dithering also removes the visible banding you otherwise get in 8-bit gradients, which is by far their ugliest aspect (especially visible with radial gradients, terrible!).
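A minimal sketch of that idea (Python; plain random noise of one code step is used as the dither purely for illustration -- ordered or error-diffusion dithering would look cleaner):

    import random

    # Linear-light gradient, gamma-encoded, then dithered down to 8-bit.
    width = 1024
    row = []
    for x in range(width):
        linear  = x / (width - 1)                    # intended physical brightness
        encoded = 255 * linear ** (1 / 2.2)          # gamma-2.2 encode, still fractional
        noisy   = encoded + (random.random() - 0.5)  # +/- half a code step of noise
        row.append(max(0, min(255, round(noisy))))   # quantise to the nearest 8-bit value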
As an example of an app that correctly handles the non-linear RGB colourspace:
ImageWorsener. I always use this app for scaling down large images; it produces much higher-quality results than standard image editing apps (Gimp, Photoshop, QB64, etc.). It doesn't darken the shrunk images, it handles translucency properly (the alpha channel in RGBA is also 8 bits, but it's a linear 8 bits), and it preserves much more of the visible hard lines and detail when shrinking.
Probably too much information, but it's fascinated me for the last decade or so!