Sounds like you boys need some help. I just tested the recursive
wget command to snag the entire Wiki for my records - and it seems to have worked.
If you're not already on Linux, the first thing you need is the "wget" utility. I downloaded a binary from
https://eternallybored.org/misc/wget/ - and I bet there are a thousand more places to get it.
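Once you've got it (or installed it through your package manager on Linux), a quick way to make sure the terminal can see it is:

wget --version

On Windows that means either putting wget.exe somewhere on your PATH or just running the commands from the folder that holds the .exe.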
Next, open a terminal, navigate to a place you'll be able to find later (like the Desktop), and paste in the following line:
wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains qb64.org --no-parent http://www.qb64.org/wiki/
The switches do this:
--recursive - recursively download every file linked from the starting page,
--no-clobber - do not overwrite files that already exist locally (useful when a previous run failed for any reason),
--page-requisites - download all page elements (JS, CSS, images, etc.),
--html-extension - add a .html extension to files that don't already have one,
--convert-links - rewrite links in the downloaded HTML files so they work offline,
--restrict-file-names=windows - rename files so the names are also valid on Windows,
--domains qb64.org - limit downloads to the listed domains (links pointing to other domains will not be followed),
--no-parent - do not climb above the starting folder (/wiki/ here), so nothing outside http://www.qb64.org/wiki/ gets pulled down.
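When it finishes, you should end up with a folder named after the site (www.qb64.org) in whatever directory you ran the command from; open the index.html under the wiki folder to browse offline.

If the server starts choking because the crawl hits it too fast, the same command can be slowed down a touch. This is just a sketch using wget's --wait and --random-wait options (the one-second delay is an arbitrary pick, not something I've tuned):

wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains qb64.org --no-parent --wait=1 --random-wait http://www.qb64.org/wiki/

--wait=1 pauses about a second between requests and --random-wait varies that delay, which keeps the download a bit friendlier to the server.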