(I'll return to update this topic as my attempts continue)
Most of the major contributors and administrators had left a long time ago.
It had been down for a very long time. Erik Möller aka Eloquence left it to die long before this downtime.
I noticed it's been restored, but the database has been locked due to spam concerns.
Those spam concerns exist, of course, because its MediaWiki install hasn't been updated and basic precautions haven't been taken. Because it's been left to die. Well, at least some time was taken to get this version of things running.
There is a shitty mirror, but the shitty mirror is shitty, so I want to archive infoAnarchy before it's gone for good.
Manual attempts
Special:AllPages generates a list, though the dropdown box has to be used once per namespace. Unfortunately, it generates a fucking tabbed three-column list.
Special:Export allows one to export pages, but they have to be named individually. And it doesn't accept a tabbed list. Fuck. It also locks up if given too much to do, and brings the website down. It may just be blocking additional connections from my end.
So what I do is get my info from Special:AllPages, copy it into my text editor (Geany), replace each tab (\t) with a newline (\n), then replace \n with \nTalk: or whatever namespace I'm on.
Then I can copy-paste it, perhaps only a handful of items at a time, into Special:Export.
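The whole text-editor dance can also be done in one shot from a shell. A minimal sketch, assuming the copied Special:AllPages text is saved as pages.txt (a name I'm making up) and I'm on the Talk namespace:

# Turn the tabbed three-column list into one title per line,
# drop any blank lines, and prepend the namespace to each title.
# Skip the s/^/Talk:/ part for the main namespace.
tr '\t' '\n' < pages.txt | sed '/^$/d; s/^/Talk:/' > export-list.txt

The output pastes straight into Special:Export, still only a handful of lines at a time given the lockups.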
bash scripting attempts
I could probably figure out how to script a solution, and I did get some very rough notes done in a bash script, but I looked around and found something more "official" from WikiTeam.
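For the record, the rough shape of those notes was just a loop over the title list, one Special:Export fetch per page. A sketch, assuming the export-list.txt from the steps above (the sleep value and filenames are my placeholders, and this is untested against the live site):

\mkdir --parents dump
while read -r title; do
    # MediaWiki accepts underscores in place of spaces in titles
    t=${title// /_}
    # Special:Export/Title answers a plain GET with that page's XML
    curl -s "http://www.infoanarchy.org/en/index.php?title=Special:Export/${t}" \
        > "dump/${t//\//_}.xml"
    sleep 2   # be nice to a server that already falls over
done < export-list.txt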
Archive Team / WikiTeam
There seem to be two places to request an archive be made:
- For bug reports, new feature requests, and notices about new wikis to back up: http://code.google.com/p/wikiteam/issues/list
The script below doesn't seem to work. So how are they doing their archiving?
# Not found!
site=http://www.infoanarchy.org/en/api.php
# Doesn't work!
site=http://www.infoanarchy.org/en/index.php

path=dump/

if [ ! -d $path ]; then
    \mkdir --parents $path
fi

\python dumpgenerator.py \
    ` # either api.php or index.php; I don't know which is best, but index.php will not work ` \
    --api=$site \
    ` # Number of seconds to wait between requests. Be nice to the server. ` \
    ` # FIXME - for some fucked-up reason, this doesn't work. Sigh. ` \
    --delay=2 \
    ` # Download images: ` \
    --images \
    ` # The namespaces you want to exclude, split by commas: ` \
    ` # --exnamespaces:0,1,2,3... ` \
    ` # When using --xml, you can filter by namespaces: ` \
    ` # --namespaces=0,1,2,3... ` \
    ` # Location to save the files: ` \
    --path=$path \
    ` # Resume previous incomplete dump, requires --path ` \
    --resume \
    ` # Full history of all pages ` \
    --xml \
    ` # Current version only would be: ` \
    ` # --xml --curonly `
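A quick sanity check on why api.php comes back "Not found" (plain curl, nothing WikiTeam-specific):

curl -s -o /dev/null -w '%{http_code}\n' http://www.infoanarchy.org/en/api.php
curl -s -o /dev/null -w '%{http_code}\n' http://www.infoanarchy.org/en/index.php

A 404 on api.php would fit the neglected install: if I remember right, api.php only shipped with MediaWiki around 1.8, so a sufficiently old install simply doesn't have it.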
Their documentation says --delay:5, which doesn't work. Neither does the --delay=2 in my script above (hence the FIXME).
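If I'm reading dumpgenerator.py's usage text right, it also takes an --index option for wikis whose api.php is missing, which sounds like exactly this case. Untested, but next on my list:

\python dumpgenerator.py \
    ` # point it at index.php directly, since api.php is missing ` \
    --index=http://www.infoanarchy.org/en/index.php \
    --xml \
    --images \
    --path=$path \
    --resume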