Hacker News

That's one thing I keep "hearing". The default configuration now is a WriteConcern of 1, meaning at least the primary has to have acknowledged the write. You can choose 2 or "majority", among other options.
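As a rough sketch (this is not driver code, and `required_acks` is a made-up name for illustration), a write concern maps to the number of replica-set members that must acknowledge a write before the client is told it succeeded; "majority" requires a quorum:

```python
def required_acks(w, replica_set_size):
    # "majority" means a quorum of the replica set;
    # numeric write concerns (1, 2, ...) are taken literally.
    if w == "majority":
        return replica_set_size // 2 + 1
    return int(w)

# 3-node replica set (primary + two secondaries):
print(required_acks(1, 3))           # 1 -> primary only (the default)
print(required_acks(2, 3))           # 2
print(required_acks("majority", 3))  # 2 -> quorum of 3
```

Note that with a 3-node set, w=2 and w="majority" happen to require the same number of acknowledgments, but "majority" scales with the replica set size while a fixed number does not.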


That won't protect you; you can read the Jepsen analyses of MongoDB here: http://jepsen.io/analyses

The latest analysis looks good, but note that it only holds under the strongest settings (meaning MongoDB configured for its slowest performance), and the analysis did not cover node crashes or restarts.


I read it. From that analysis it seems data loss will rarely happen in the real world if you use a WriteConcern of majority and you don't have high latency between nodes.

One of the comments noted that, theoretically, this could happen with any system: if something goes wrong between the time data is written to disk and the time the write is acknowledged, you can end up with extra data.

There are times, though, when you care more about speed than reliability, e.g. capturing string data from an IoT device, logging, etc.

There are even times when eventual consistency is good enough. But definitely choose the right tool for the job. I wouldn't trust Mongo with financial data, where transactions are a must.

And in my preferred language, C#, if you write your code correctly you can swap your LINQ provider from Entity Framework to the MongoDB driver, for instance, without any major rewrite, so you aren't stuck with your choice.
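The idea can be sketched in Python as an analogy (the thread's example is C#/LINQ; all names here are hypothetical): calling code is written against a small query interface, so the backing "provider" can be swapped without touching the query logic.

```python
class MemoryOrders:
    """Stand-in for one provider (like LINQ to Objects); a hypothetical
    MongoOrders or SqlOrders class could expose the same interface."""

    def __init__(self, rows):
        self._rows = rows

    def where(self, pred):
        # Return a new queryable filtered by the predicate.
        return MemoryOrders([r for r in self._rows if pred(r)])

    def to_list(self):
        return list(self._rows)


def large_orders(orders, minimum):
    # Provider-agnostic query: works with any object exposing
    # .where() and .to_list(), regardless of the backing store.
    return orders.where(lambda o: o["total"] >= minimum).to_list()


rows = [{"id": 1, "total": 50}, {"id": 2, "total": 500}]
print(large_orders(MemoryOrders(rows), 100))  # [{'id': 2, 'total': 500}]
```

The point mirrors the C# case: as long as queries target the shared interface (IQueryable<T> in .NET), swapping the provider underneath doesn't force a rewrite of the query code.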



