There is a huge difference in the sophistication between what you describe and what a traditional SQL system like Postgres does.
The problem with just mmapping files is that, to sync, you have to do a bunch of random writes. To commit two transactions, you have to jump to two different places on the disk and do two separate writes. You can defer them, but then your transaction commit latency goes up dramatically. So the user is between a rock and a hard place: uncertain durability for extended periods of time, or long commit latency.
Compare that to a system based on a Write-Ahead Log (WAL). The log is 100% sequential (and often preallocated in large chunks), and a transaction is durable once the log has been flushed past that transaction's commit point. All transactions go into the same log, so under high concurrency, one flush to disk might commit several transactions. And even if you flush for each transaction, at least you don't have to jump around on disk (and, if using a controller with a battery-backed cache to reduce latency, you can make do with a fairly small cache).
The writes to the main data area can be deferred for a long time (30 minutes might be normal), and syncing those is called a checkpoint. You can spread each checkpoint out over time (it's a continuous process, really) so that it doesn't cause transaction latency spikes. Deferring the writes for so long allows them to be scheduled more efficiently without sacrificing durability at all.
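In Postgres, the knobs for this look roughly like the following (`postgresql.conf`; the values here are illustrative, not recommendations):

```
checkpoint_timeout = 30min           # how long dirty data pages may sit before a checkpoint
checkpoint_completion_target = 0.9   # spread each checkpoint's writes over ~90% of the interval
```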
On top of that, if you are OK with small windows of time before commits become durable, Postgres allows you to choose on a per-transaction basis not to wait for the WAL flush before returning to the client. If you crash, you are still guaranteed to be consistent, and if a normal (synchronous) transaction comes along, it will of course force a WAL flush. You can control the window of time before Postgres will force a WAL flush, where 200 milliseconds might be normal. The window can be that small and still gain you a lot because it's just writing a sequential log, so there's no need to defer it for multiple seconds.
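In Postgres this is the `synchronous_commit` setting, which can be toggled per transaction; the flush window is governed by the WAL writer's wake-up interval (`wal_writer_delay`, 200 ms by default). A sketch, with a hypothetical `accounts` table:

```sql
BEGIN;
SET LOCAL synchronous_commit = off;        -- this transaction only
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;  -- returns before the WAL flush; durable shortly afterwards
```

If the server crashes in that window the transaction may be lost, but the database is never left inconsistent.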
In other words: Mmapping files gives you a choice between very short commit latency and long periods of uncertain durability; or long commit latency. WAL gives you a choice between very short commit latency and short periods of uncertain durability; or short commit latency.
I understand ArangoDB is new. The description sounds interesting, and I like some things about the goals. But I think it's way off the mark to tout the durability as offering a nice trade-off, when a much better method (at least for OLTP) has been known for two decades[1].
[1] Basic idea introduced by ARIES paper in 1992. Couldn't find a link to a PDF, but it is a well-known paper.