What is the best method to scrape an entire website?
The site is powered by a CMS that has stopped working; getting it fixed would be expensive, and we plan to redevelop the website anyway.
So, in the meantime, I would like to capture the entire site as plain HTML/CSS/image content and make minor updates to it as needed until the new site comes along.
Any recommendations?
Consider HTTrack. It's a free and easy-to-use offline browser utility.
It lets you download a website from the Internet to a local directory, recursively building the directory structure and fetching the HTML, images, and other files from the server to your computer.
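Alongside the GUI, HTTrack ships a command-line tool. A minimal sketch of a mirroring run, assuming `httrack` is installed and using a placeholder URL and output directory (swap in your own site):

```shell
# Hypothetical example: mirror a site with HTTrack's command-line mode.
# "https://www.example.com/" and ./site-mirror are placeholders.
#   -O                  local output directory for the mirror
#   "+*.example.com/*"  filter that keeps the crawl inside the site's own domain
if command -v httrack >/dev/null 2>&1; then
  httrack "https://www.example.com/" -O ./site-mirror "+*.example.com/*"
else
  echo "httrack is not installed; install it or use the GUI instead"
fi
```

The resulting `./site-mirror` directory contains plain HTML, CSS, and images with links rewritten to work locally, so you can serve it from any static web server and hand-edit files as needed.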