# 🕷️ Doc-grabber

A simple CLI tool for scraping documentation from sites that don't offer downloads and saving it for offline use.
If a site buries its docs behind JS bloat or tab hell, or simply lacks an export button, this tool rips the docs out, cleans them up, rewires the links, and leaves you with a sane local copy. Think: minimal effort, maximum portability.
## ⚙️ Features

- Scrapes full documentation trees (static or semi-dynamic)
- Rewrites internal links for offline navigation
- Strips trash (headers, footers, trackers)
- Converts links to Obsidian-style format (optional)
- Flexible output formats: choose `.md`, `.html`, or `.md` + `.rtf` (format names are case-insensitive)
- Outputs a clean local folder you can archive or browse
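The link-rewiring step can be sketched roughly as follows. This is an illustrative, regex-based sketch, not the tool's actual implementation: it assumes pages are saved as local `.html` files, and the function name and approach are hypothetical.

```python
import re
from urllib.parse import urlparse

def rewrite_links(html: str, site_root: str) -> str:
    """Rewrite links that point inside site_root to relative local
    paths, so saved pages navigate offline. Illustrative sketch only."""
    root = urlparse(site_root)

    def to_local(match: re.Match) -> str:
        url = match.group(1)
        parsed = urlparse(url)
        # Links to other domains are left untouched
        if parsed.netloc and parsed.netloc != root.netloc:
            return match.group(0)
        path = parsed.path.strip("/") or "index"
        if not path.endswith(".html"):
            path += ".html"
        return f'href="{path}"'

    return re.sub(r'href="([^"]+)"', to_local, html)
```

For example, `rewrite_links('<a href="https://docs.example.com/guide/intro">Intro</a>', "https://docs.example.com")` turns the link into `href="guide/intro.html"`, while external links pass through unchanged.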
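The optional Obsidian-style conversion presumably turns standard markdown links into wiki-links. A minimal sketch of that transformation, assuming internal links target `.md` files (function name and exact rules are assumptions, not the tool's API):

```python
import re

def to_obsidian_links(markdown: str) -> str:
    """Convert standard markdown links to local .md pages into Obsidian
    wiki-links: [Title](page.md) -> [[page|Title]]. External http(s)
    links are left alone. Illustrative sketch only."""
    def convert(m: re.Match) -> str:
        title, target = m.group(1), m.group(2)
        if target.startswith(("http://", "https://")):
            return m.group(0)
        note = target[:-3] if target.endswith(".md") else target
        return f"[[{note}|{title}]]"

    return re.sub(r"\[([^\]]+)\]\(([^)]+)\)", convert, markdown)
```

So `[Intro](intro.md)` becomes `[[intro|Intro]]`, which Obsidian resolves within a vault without caring about file paths.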
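Case-insensitive format selection could be normalized along these lines; the accepted set of names here is an assumption based on the feature list:

```python
def normalize_format(name: str) -> str:
    """Map a user-supplied output format name to its canonical form,
    case-insensitively. The accepted set here is an assumption."""
    canonical = {"md": "md", "html": "html", "md+rtf": "md+rtf"}
    key = name.strip().lower()
    if key not in canonical:
        raise ValueError(f"unsupported format: {name!r}")
    return canonical[key]
```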
## 💸 Pricing

Free right now while it's in development. Once it's stable and polished: $20/year. Grab it early if you're cool with mostly-functional jank.
## 🚧 Status

Work in progress. It works, but don't expect it to hold your hand.
You'll want to:

- Run it in a virtualenv or Docker if you're paranoid
- Test it on the sites you care about
- File bugs or PR fixes if it explodes on edge cases
## 🔗 Related Projects

If you're just looking for ready-to-use offline docs, check out Random-Docs.
Built by Ben — because websites shouldn’t vanish and take their documentation with them.