Hit Ctrl+S and save it as an HTML file (not MHTML). Then, in the <head> tag, add a <base href="http://downloaded_site's_address.com"> tag. For this webpage, for example, it would be <base href="http://stackoverflow.com">.
This makes sure that all relative links point back to where they're supposed to instead of to the folder you saved the HTML file in, so all of the resources (CSS, images, JavaScript, etc.) load correctly instead of leaving you with just HTML.
See MDN for more details on the <base> tag.
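For example, the <head> of the saved Stack Overflow page might end up looking something like this (the stylesheet path is just an illustration):

<head>
  <base href="http://stackoverflow.com">
  <!-- A relative URL like this one now resolves to http://stackoverflow.com/styles.css
       instead of a file next to your saved HTML -->
  <link rel="stylesheet" href="/styles.css">
</head>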
The HTML, CSS, and JavaScript are sent to your computer when you request them over HTTP (for instance, when you enter a URL in your browser), so you already have those parts and can replicate them on your own PC or server. But if the website relies on server-side code (databases, some type of authentication, etc.), you will not have access to that, and therefore won't be able to replicate it on your own PC or server.
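As a rough illustration, fetching a page with curl (the URL here is just a placeholder) saves exactly what the server sent back, and nothing of the code that generated it:

# Saves the HTML the server returned; the server-side code and database never leave the server.
curl -s https://example.com/ -o page.html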
I built a fairly simple website for a business that showcases its work online in several categories. It uses ColdFusion to generate a lot of sub-pages for each category by reading the file system and merging the results with metadata supplied by a spreadsheet file.
I'm finally fed up with my CF service (not to mention, who programs in CF anymore?), and I'm looking to get away from it entirely. In the end this website is static, so I'm looking for a good way to scrape the entire rendered site for use on an alternative static server. What's a good way to do this nowadays? I've done some searching, but "site scraper" now seems to refer to harvesting data like pricing metadata rather than downloading the actual website files.
HTTrack works like a champ for copying the contents of an entire site. This tool can even grab the pieces needed to make a website with active code content work offline. I am amazed at the stuff it can replicate offline.
This program will do all you require of it.
Happy hunting!
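If you prefer the command line over the GUI, HTTrack also ships as a command-line tool; a minimal sketch, with example.com and ./mysite standing in for your site and output folder:

# Mirror the site into ./mysite, following only links on example.com, with verbose output
httrack "https://example.com/" -O "./mysite" "+*.example.com/*" -v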
Wget is a classic command-line tool for this kind of task. It comes with most Unix/Linux systems, and you can get it for Windows too. On a Mac, Homebrew is the easiest way to install it (brew install wget).
You'd do something like:
wget -r --no-parent http://example.com/songs/
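If you want a full offline mirror with working local links rather than just one directory, a common combination of flags looks like the following sketch (example.com is a placeholder):

# Recursively mirror the site, download page assets (CSS, images, scripts),
# rewrite links for local browsing, and never ascend above the start URL
wget --mirror --page-requisites --convert-links --no-parent https://example.com/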
For more details, see the Wget manual and its examples, or these guides:
wget: Download entire websites easy
Wget examples and scripts
If you're on a Mac, install Homebrew, then use brew to install wget, and finally use the wget command to download the page.
wget -p -k -e robots=off -U 'Mozilla/5.0 (X11; U; Linux i686; en-US;rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' https://www.google.com
Alternatively, view the page's source in your browser, copy it, and paste it into Notepad to save the HTML. For the other resources, open the CSS and JavaScript links referenced in the page source and save each one according to its file extension.
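If saving each asset by hand gets tedious, the same idea can be scripted with curl; a rough sketch, with both the URLs and the output filenames as placeholders:

# Download the individual assets referenced in the page source
curl -s https://example.com/css/main.css -o main.css
curl -s https://example.com/js/app.js -o app.js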