---
title: Archiving Sites
date: 2019-08-02T22:42:16-04:00
draft: false
tags: []
---
I have several old WordPress sites lying around that I would like to archive but no longer maintain. Since I don't intend to create any more content on these sites, we can use a tool like `wget` to scrape an existing site and produce a somewhat read-only copy of it. I say read-only not because we can't edit the result, but because it's no longer in the original source format of the website.
Several people have tackled this problem before:
- https://stackoverflow.com/questions/538865/how-do-you-archive-an-entire-website-for-offline-viewing#538878
- https://letswp.io/download-an-entire-website-wget-windows/
After consulting these resources, I ultimately came to the following command:
```bash
# --mirror            recursive crawl with timestamping (implies -r -N -l inf)
# --convert-links     rewrite links so the copy browses offline
# --adjust-extension  save pages with an .html extension
# --page-requisites   also fetch the CSS, images, and scripts each page needs
# (--no-clobber is left out: it conflicts with the -N timestamping
#  that --mirror enables, and wget refuses to run with both)
wget --mirror \
     --convert-links \
     --adjust-extension \
     --page-requisites \
     https://url/of/web/site
```
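Once the crawl finishes, `wget` leaves the mirror in a directory named after the site's hostname. A quick way to confirm the copy actually browses offline is to serve that directory locally; a minimal sketch, assuming a hypothetical hostname of `example.com`:

```bash
# wget writes the mirror into a directory named after the host
cd example.com

# serve the static copy at http://localhost:8000 and click around
# to confirm that links and page assets resolve offline
python3 -m http.server 8000
```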
There were other solutions in that Stack Overflow post, but something about the simplicity of `wget` appealed to me.
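Since the point is long-term storage rather than a live site, the finished mirror can also be bundled into a single compressed file; again a sketch, assuming the hypothetical `example.com` directory from above:

```bash
# pack the mirrored directory into one dated, compressed archive
tar -czf example.com-$(date +%F).tar.gz example.com/
```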