Almost a decade ago I was the lone engineer at Common Crawl. Common Crawl heavily leverages the WARC format.
My favorite capability of the WARC format comes from the fact that most compression formats can be written to allow random access. Compression formats such as `gzip` and `zstandard` allow multiple compressed streams to be concatenated and to act during decompression as if they're one contiguous file.
Hence you can compress records individually and literally concatenate the results:
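A minimal sketch of this in Python (stdlib `gzip`; the record contents are made up):

```python
import gzip

# Compress two records independently, then concatenate the members.
record_a = gzip.compress(b"first record\n")
record_b = gzip.compress(b"second record\n")
combined = record_a + record_b

# Standard decompression treats the concatenation as one stream.
assert gzip.decompress(combined) == b"first record\nsecond record\n"

# But each member is also independently readable if you know its offset.
assert gzip.decompress(combined[len(record_a):]) == b"second record\n"
```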
For files composed of a textual / clearly delimited format, that means you can fairly trivially leap to a different offset, assuming each of the inputs is compressed individually. You lose out on some amount of compression, but random lookup seems a fairly reasonable tradeoff.
Common Crawl was able to use this to allow entirely random lookups into web crawl datasets dozens or hundreds of terabytes in size, without any change in file format, by utilizing Amazon S3's support for HTTP Range requests[1].
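To sketch the pattern (a byte slice stands in for S3 serving the ranged response; in practice the request carries a header like `Range: bytes=<offset>-<offset+length-1>`, and the index of offsets lives in separate index files):

```python
import gzip

# Build a "dataset": each page compressed as its own gzip member, plus an
# index of (offset, length) per page - the role the index files play.
pages = [b"<html>page %d</html>" % i for i in range(3)]
blob, index = b"", []
for page in pages:
    member = gzip.compress(page)
    index.append((len(blob), len(member)))
    blob += member

# A random lookup: fetch only the bytes for page 2 and decompress just
# that member, never touching the rest of the (potentially huge) file.
offset, length = index[2]
assert gzip.decompress(blob[offset:offset + length]) == b"<html>page 2</html>"
```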
Trading compression for random lookup is even more forgiving if you create a separate compression dictionary tailored to your dataset. For web crawling that would likely recover the majority of the compression gains, unless pages from the same website are written sequentially, which is unlikely in most situations. A website's shared template(s) would give very high compression gains across files, which you'd lose by allowing random lookup, but most crawlers don't operate sequentially, so the local compression gains you'd sacrifice are likely small.
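The dictionary idea can be illustrated with the stdlib's `zlib` preset-dictionary support (zstandard's trained dictionaries are the industrial-strength version of this; the template string here is invented):

```python
import zlib

# A shared "template" that many pages repeat - the kind of redundancy a
# dictionary recovers once records are compressed individually.
template = b"<html><head><title>Shop</title></head><body class='page'>"
page = template + b"item 42" + b"</body></html>"

# Without a dictionary, each record pays full price for the template.
plain = zlib.compress(page)

# With the template as a preset dictionary, the record compresses to
# back-references into the dictionary and shrinks substantially.
comp = zlib.compressobj(zdict=template)
with_dict = comp.compress(page) + comp.flush()
assert len(with_dict) < len(plain)

# Decompression needs the same dictionary on hand.
decomp = zlib.decompressobj(zdict=template)
assert decomp.decompress(with_dict) == page
```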
Isn't this a benefit you'd trivially get just by using .zip? I pull individual files out of large .zip archives in S3 using HTTP range requests; it works exactly as you'd expect. You know the zip's central directory is at the end of the file, and it tells you the offset and length of each entry's compressed data so you can request that range. Two requests if you've never seen the .zip before, one if you've got the central directory cached.
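A sketch of the two-request dance, with byte slices standing in for the Range requests (record layouts per the zip spec; the in-memory archive is made up):

```python
import io
import struct
import zipfile
import zlib

# An in-memory zip playing the role of the archive sitting in S3.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("a.txt", "hello " * 200)
    z.writestr("b.txt", "world " * 200)
blob = buf.getvalue()

# Request 1: the file's tail ("Range: bytes=-1024"). The end-of-central-
# directory record (signature PK\x05\x06) stores the central directory's
# size and offset.
tail = blob[max(0, len(blob) - 1024):]
eocd = tail.rfind(b"PK\x05\x06")
cd_size, cd_offset = struct.unpack("<II", tail[eocd + 12:eocd + 20])
cd = blob[cd_offset:cd_offset + cd_size]

# Walk the central directory: each entry records the compression method,
# compressed size, and the offset of the entry's local file header.
entries, pos = {}, 0
while pos < len(cd):
    method = struct.unpack("<H", cd[pos + 10:pos + 12])[0]
    csize = struct.unpack("<I", cd[pos + 20:pos + 24])[0]
    nlen, xlen, clen = struct.unpack("<HHH", cd[pos + 28:pos + 34])
    local_off = struct.unpack("<I", cd[pos + 42:pos + 46])[0]
    name = cd[pos + 46:pos + 46 + nlen].decode()
    entries[name] = (method, csize, local_off)
    pos += 46 + nlen + xlen + clen

# Request 2: the chosen entry's local header plus its compressed bytes.
method, csize, local_off = entries["b.txt"]
header = blob[local_off:local_off + 30]
nlen, xlen = struct.unpack("<HH", header[26:30])
start = local_off + 30 + nlen + xlen
data = blob[start:start + csize]
text = zlib.decompress(data, -15) if method == 8 else data  # raw deflate
assert text == b"world " * 200
```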
As mentioned, it's trivial across the spread of compression algorithms supporting this type of behaviour (`gzip`, `zstandard`, `zip`, ...), with the header in `zip` making it even more convenient, as you note!
WARC as a format essentially states that, unless you have good reason, "record at a time" compression is preferred[1].
The mixture of "technically possible" and "part of the spec" is what makes it so useful: any generic WARC tool can support random access, there are explicit fields to index over (URL), and even non-conforming WARC files can easily be rewritten to add such a capability.
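Putting the pieces together, a record-at-a-time WARC writer plus a URL index might look like this (the records are pared down for illustration; real ones carry more headers, such as WARC-Date and WARC-Record-ID, per the spec):

```python
import gzip

def warc_record(url, body):
    # A minimal WARC response record: version line, a few named headers,
    # a blank line, the block, and the two terminating CRLFs.
    head = (b"WARC/1.0\r\n"
            b"WARC-Type: response\r\n"
            b"WARC-Target-URI: " + url + b"\r\n"
            b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n")
    return head + body + b"\r\n\r\n"

blob, index = b"", {}
for url, body in [(b"http://a.example/", b"<html>A</html>"),
                  (b"http://b.example/", b"<html>B</html>")]:
    member = gzip.compress(warc_record(url, body))  # record-at-a-time
    index[url] = (len(blob), len(member))           # CDX-style: URL -> (offset, length)
    blob += member

# Random access by URL: fetch and decompress only that record's member.
offset, length = index[b"http://b.example/"]
record = gzip.decompress(blob[offset:offset + length])
assert b"<html>B</html>" in record
```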
It occurs to me that you could stick a few bytes of header in the beginning of the ZIP file, to tell you the exact location of the header at the end of it, thus avoiding multiple lookups. It would even still be ZIP-compatible.
Definitely. I take an alternative but similar approach: since I control the zip files, I can guarantee that the header is always within the last N kilobytes of the zip file (configurable value of N). I spend a HEAD request to get the length of the zip file and then walk backwards by N kilobytes. You would request the few bytes at the beginning instead of using that request to get the file length.
If you're creating the zips in the first place, you can just check and see how big the headers are when you create them. If you happen to get N wrong, you can request another chunk, but obviously it's nice to avoid multiple requests to get the header. For my use case, the number of files is small and relatively consistent between zips so a generous value of 64KB ended up working great.
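The guess-and-retry tail fetch can be sketched like this (`fetch_range` is a stand-in I've invented for whatever issues the `Range: bytes=start-end` request):

```python
import io
import zipfile

def fetch_eocd(fetch_range, file_size, n=64 * 1024):
    """Locate the zip's end-of-central-directory record from the tail.

    fetch_range(start, end) stands in for an HTTP Range request.
    Doubles the window and retries if the EOCD signature isn't in the
    first chunk (e.g. because of a long archive comment).
    """
    while True:
        start = max(0, file_size - n)
        tail = fetch_range(start, file_size - 1)
        pos = tail.rfind(b"PK\x05\x06")
        if pos != -1:
            return start + pos
        if start == 0:
            raise ValueError("not a zip file: no EOCD record")
        n *= 2  # one extra round trip, only when the guess was too small

# Demo: an in-memory zip, with a slice playing the ranged response.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("a.txt", "hi")
blob = buf.getvalue()
off = fetch_eocd(lambda s, e: blob[s:e + 1], len(blob))
assert blob[off:off + 4] == b"PK\x05\x06"
```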
If anyone's interested in web crawling technology, check out Heritrix [1]. It's been around since 2004, and while not the most performant, it has incorporated many responsible crawling practices into its design and, as this article pointed out, the WARC format.
Second that. Anyone interested in studying web crawler tech should definitely take a look at Heritrix. I used it extensively when it was still in 2.x. They got so many things right about writing well-behaved and fault-tolerant crawlers. Plus the code is very modular and extensible, if you know some Java. The other popular option then was Apache Nutch, but it had too much Hadoop baggage.
Hadoop is a bit of a nuisance in this general corner of Java. It's got a propensity for integrating deeply with cluster-adjacent technology in a way that is very difficult to root out.
Kind of a pity, since it has the effect of making things that could be very easy, such as reading and writing Parquet files, much harder than they need to be in Java.
Its performance shines at larger scales. It's designed for politeness to individual domains, but scales out well for very wide crawls of many domains. It's pretty much endlessly configurable, but not the easiest to learn.
I wish Apple would open source the .webarchive format.
Nothing beats the user experience: Cmd-S in Safari and select "Web Archive". You get a permanent copy, indexed by Spotlight and accessible on all your devices.
I use it for collecting recipes around the web. However, I'm a bit concerned about data longevity. I've tried other, more open formats (I loved SingleFile) but none have the UX and support that this has. It's so simple (as it should be).
If you are working a lot with those Parquet files, it might be worth looking at Apache Arrow, which is an in-memory/wire format for working with columnar data. It has a lot of good support for Parquet from what I gather, and is really focused on allowing efficient wrangling of data. Zero-copy and all that.
Note: no affiliation, just in a deep rabbit hole on data
The WARC (Web ARChive) file format offers a convention for concatenating multiple resource records (data objects), each consisting of a set of simple text headers and an arbitrary data block, into one long file. The WARC format is an extension of the ARC file format (ARC) that has traditionally been used to store "web crawls" as sequences of content blocks harvested from the World Wide Web. Each capture in an ARC file is preceded by a one-line header that very briefly describes the harvested content and its length. This is directly followed by the retrieval protocol response messages and content.

The original ARC format file has been used by the Internet Archive (IA) since 1996 for managing billions of objects, and by several national libraries.
[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requ...