It hooks browser requests/responses and saves responses as-is under a key based on the request. It can then replay the original response.
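The core idea — save each response as-is under a key derived from the request, then replay it later — can be sketched roughly like this. This is just an illustration, not the actual tool's code; the class name, key scheme, and on-disk layout are all my own assumptions:

```python
import hashlib
import json
from pathlib import Path

class ResponseCache:
    """Illustrative sketch: store responses verbatim, keyed by the
    request, and replay them later. Not any particular tool's code."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _key(self, method, url):
        # Key on the request line; a real tool might also fold in
        # significant headers or the request body.
        return hashlib.sha256(f"{method} {url}".encode()).hexdigest()

    def save(self, method, url, status, headers, body):
        # Metadata (status + headers) and the raw body are written
        # side by side, with no rewriting of the content itself.
        path = self.root / self._key(method, url)
        meta = {"status": status, "headers": headers}
        path.with_suffix(".json").write_text(json.dumps(meta))
        path.with_suffix(".body").write_bytes(body)

    def replay(self, method, url):
        # Return the saved response untouched, or None if we never
        # saw this request (fall through to the network).
        path = self.root / self._key(method, url)
        meta_file = path.with_suffix(".json")
        if not meta_file.exists():
            return None
        meta = json.loads(meta_file.read_text())
        body = path.with_suffix(".body").read_bytes()
        return meta["status"], meta["headers"], body
```

Because the browser gets back byte-identical responses, it behaves as if it were online.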
This has been done multiple times before; the model is cumbersome and hard to manage, has plenty of annoying edge cases where it completely fails, and doesn't work well with streaming audio/video.
True that it completely fails with streaming audio and video (though that's probably achievable), but if you want that content there are many good tools. The advantage is that it just saves content to disk and revives it, so to the browser it seems as if it's online, even when it may not be.
It doesn't alter the content in any way, and doesn't need to compress or rewrite anything to fit it into a single file or some unusual archive format. I'm sure there are uses for those, but this is not that.
It just saves each resource to disk as it receives it.
Actually, I think this "high fidelity" approach (though not "broadband" fidelity, since for now it excludes streaming audio/video) has fewer edge cases than archive formats where you need to rewrite or otherwise alter the content. But the web is vast, so you've probably had a different experience! :)
As far as the browser is concerned, I think all the CORS and mandatory-HTTPS stuff just works (at least it used to!).
BTW - I was not aware of this having been done before? Do you have any links? Very interested!
There's a project whose name is a five-digit number beginning with 2 (it refers to a port number); it's a Python app that uses Chrome's debug features to save everything you're doing. I don't know how it would work with Firefox unless the debug functionality there is the same. My memory of this is hazy, so I might have some details wrong.