---
title: "Archiving Sites"
date: 2019-08-02T22:42:16-04:00
draft: false
---

I have several old WordPress sites that are now only alive for archival purposes. I've been trying to migrate off of WordPress since I don't like having yet another system to manage. Luckily, for archival-type sites I don't need the whole WordPress backend, since I don't actually need any CRUD functionality. (Well, I still need the R part.)

Solution: `wget` comes to the [rescue](https://stackoverflow.com/questions/538865/how-do-you-archive-an-entire-website-for-offline-viewing#538878).

Thanks to user [chuckg](https://stackoverflow.com/users/63193/chuckg), I now know you can run `wget -m -k -K -E https://url/of/web/site` to get a full offline copy of a website.

[Joel Gillman](https://stackoverflow.com/users/916604/jgillman) expanded the command out to its long-form flags: `wget --mirror --convert-links --backup-converted --adjust-extension https://url/of/web/site`.

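For my own future reference, here's a sketch of that long-form command with a note on what each flag does. The URL is the same placeholder as above, so swap in the site you actually want to archive.

```bash
# Mirror a site for offline/archival viewing (placeholder URL, swap in your own).
#   --mirror             recursive download with timestamping and infinite depth (-m)
#   --convert-links      rewrite links in the downloaded pages so they work locally (-k)
#   --backup-converted   keep a .orig copy of each file before its links are rewritten (-K)
#   --adjust-extension   save pages with an .html extension so browsers open them directly (-E)
wget --mirror --convert-links --backup-converted --adjust-extension \
     https://url/of/web/site
```

As I understand it, the `.orig` backups from `--backup-converted` also keep the timestamping from `--mirror` honest if you re-run the command later.
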
There are other solutions in that Stack Overflow post, but something about the simplicity of `wget` appealed to me.

[Check out this now WordPress-free site of mine!](https://sentenceworthy.com)