Monday, November 11, 2013

MongoDB FUD

Here is a brilliant analysis algorithm for comparing Oracle, Coherence and MongoDB.


Then I googled "Coherence sucks"... 1.7 million hits, while "MongoDB sucks" only has 43,000 hits.

Clearly, MongoDB is 40x less sucky than Coherence!

If you can't find a blog complaining about some tech, then you are not looking very hard.

MongoDB has some warts. Some have been fixed. Some are being fixed. It is not right for every application. Developers have to read the docs before using it, but the same can be said for every other solution.

I was in a meeting the other day. We wanted to use MySQL to stage some data so QA could see the results for testing. It is a high-speed-write app. A dev wanted to use MongoDB... I was like NOOOO! Then he said MySQL would not scale... I WAS LIKE ARGH! Sometimes I want to punch myself in the eye when I hear people regurgitate crap they read.

But spewing FUD in the other direction does not help either. There are many apps where MongoDB would work out fine. There are many where MySQL, Oracle or Cassandra would be a much better choice. Read. Learn. Don't listen to people who have a vested interest (or at least, don't only listen to them).

How many projects have I been on where someone asks why some listing is so slow? How many records are in the table? 10,000,000. OK, what are you sorting on? Last name? I see. What indexes are on this table? BLANK STARE? PUNCH ME IN THE NUTS! F! F! F! Really? Do I go on TSS and say how slow SQL is? No! Knowing the tool is the developer's job. Writing a tool that is foolproof is impossible.
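
To make that concrete, here is a minimal JDBC sketch. The shop database and the customers/last_name names are hypothetical; the point is to spot the full scan with EXPLAIN and fix it with one index.

    import java.sql.*;

    // Hypothetical example: a 10,000,000-row listing sorted on an unindexed column.
    public class FixTheSort {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/shop", "user", "pass");
                 Statement st = con.createStatement()) {

                // EXPLAIN shows a full table scan plus a filesort when the index is missing.
                try (ResultSet rs = st.executeQuery(
                        "EXPLAIN SELECT * FROM customers ORDER BY last_name LIMIT 50")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("type") + " / " + rs.getString("Extra"));
                    }
                }

                // One line fixes it: a BTree keeps last_name sorted so the DB never filesorts.
                st.executeUpdate("CREATE INDEX idx_last_name ON customers (last_name)");
            }
        }
    }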

The same things they say about MongoDB today, the mainframe DB guys were saying about Oracle DB 20 years ago. I think Mongo's success has more to do with marketing and ease of use than technical merit, but that does not mean it does not have a place or some merit. The same could be said about a lot of companies. Cough! Cough!

Here is the point to take home. If you don't fsync on every call, you can lose data. If you do fsync on every call, you will not scale! (For Java devs: think OutputStream.flush(), or more precisely FileDescriptor.sync(), which is what actually forces bytes to disk.) If you use a single machine, and have a DBA who monitors things from time to time and does backups, that is probably OK for a lot of apps. If you can't afford downtime or data loss, you need replication. As reliable as disks are, they fail. You can mitigate failure with RAID and hot-swappable backups. All replication has failure edge cases.
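
For the Java devs, here is a rough sketch of that tradeoff; the file name and record count are arbitrary. Syncing every write is the safe-but-slow end of the dial; buffering and syncing once is the fast-but-riskier end.

    import java.io.*;

    // A minimal sketch of the durability-versus-throughput tradeoff.
    public class SyncTradeoff {
        public static void main(String[] args) throws IOException {
            byte[] record = "some-record\n".getBytes();

            // Safe but slow: force every record all the way to the platter.
            try (FileOutputStream out = new FileOutputStream("journal-safe.dat")) {
                long start = System.nanoTime();
                for (int i = 0; i < 1_000; i++) {
                    out.write(record);
                    out.getFD().sync(); // the real fsync; flush() alone only empties user-space buffers
                }
                System.out.printf("fsync every write: %d ms%n",
                        (System.nanoTime() - start) / 1_000_000);
            }

            // Fast but risky: let the OS decide when to hit the disk.
            try (FileOutputStream out = new FileOutputStream("journal-fast.dat")) {
                long start = System.nanoTime();
                for (int i = 0; i < 1_000; i++) {
                    out.write(record); // a crash here can lose everything not yet flushed
                }
                out.getFD().sync(); // one sync at the end
                System.out.printf("fsync once at end: %d ms%n",
                        (System.nanoTime() - start) / 1_000_000);
            }
        }
    }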

There is no magic in the world. If you replicate the data, then you have a better chance of not losing it.

If you pick a DB that is geared for high-speed reads and low-speed writes, and try to scale it out to do high-speed writes, you will hurt yourself. At a minimum you will spend a lot more money on hardware. Last I checked, MongoDB uses mmap, and mmap uses virtual memory, which gets paged in and out by the OS as efficiently as the OS can manage, which is actually quite good, but... if you store a giant BTree in memory and then ask the OS to page it in and out to disk in small pieces, by nature that does a lot of disk seeking. It is also, by nature, going to get behind, and the longer it gets behind, the more data you will lose on a single machine when something fails.

This is why the last five versions of MongoDB have shipped with journaling turned on by default: if you are running MongoDB on a single machine, you have to flush periodically or you have a high probability of data loss in an outage, due to the lag built into the eventual-consistency model. If you replicate to at least two machines, you can reduce the data loss (especially if you use the it-must-exist-on-the-other-box data-safety option that comes with MongoDB).
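
On the Java side, that data-safety dial is the write concern. Here is a minimal sketch, assuming the legacy 2.x MongoDB Java driver (2.10 or later) and made-up database, collection and document names:

    import com.mongodb.*;

    // Each write concern below waits for a stronger guarantee (and costs more latency).
    public class WriteConcernDemo {
        public static void main(String[] args) throws Exception {
            MongoClient client = new MongoClient("localhost");
            DBCollection orders = client.getDB("shop").getCollection("orders");

            // Acknowledged by the primary's memory only: fast, but lost if the box dies.
            orders.insert(new BasicDBObject("sku", "tractor-42"), WriteConcern.ACKNOWLEDGED);

            // Wait for the journal to hit disk: survives a crash of the primary.
            orders.insert(new BasicDBObject("sku", "tractor-43"), WriteConcern.JOURNALED);

            // The it-must-exist-on-the-other-box option: wait for a secondary too.
            orders.insert(new BasicDBObject("sku", "tractor-44"), WriteConcern.REPLICA_ACKNOWLEDGED);

            client.close();
        }
    }

Each step down that list trades latency for safety, which is exactly the data-safety continuum the next paragraph is about.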

Data safety is an application requirement, and it has a cost. Less safety = faster. More safety = slower (or more expensive to go fast). You have to balance that cost against how safe you need the data to be. Can you afford a few virtual tractor sales getting lost on FarmVille? Then dial back the data safety a bit. Security is a continuum, just like data safety. It is more secure to keep your servers in a closet with no Internet access. See... clearly, since it is more secure to do it this way, all apps must be run on servers locked in a closet with no Internet. Of course not. My point is that each app has a different tolerance for security, just as it does for data safety and availability. There is no one-size-fits-all solution, because each problem has its own requirements. Hell, banks don't use distributed two-phase commits, not because they don't want consistency, but because they need some level of performance, and it is easier to manually correct the 1-in-10,000,000 times something goes wrong than to buy and engineer the hardware to handle two-phase commit at the scale they need. Engineering tradeoffs. Period.

Disks are still slow at seeking after all these years. Disks are really fast at reading and writing long sequences (300 MB per second, higher with a 20-disk RAID 0 machine) but bad at seeking around the disk for small bits. This is why Cassandra and Couchbase are so good at high-speed writes, and MongoDB not so great (yet; I can think of 100,000,000 reasons why it will get better). You can speed up MongoDB writes by sharding, but that is not a free lunch (you have to pick a good shard key, setup is a lot harder, mongos, auto-sharding, loss of operational predictability, etc.). MySQL and Oracle can be set up to be very fast at high-speed writes.
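
You can watch the seek penalty with a few lines of Java; the sizes here are arbitrary, and you need a spinning disk (not an SSD) to see the real gap. Write the same number of blocks sequentially, then scattered around the file the way a paged-in BTree scatters them:

    import java.io.RandomAccessFile;
    import java.util.Random;

    // Rough sketch: sequential writes versus seek-heavy writes of the same amount of data.
    public class SeekCost {
        static final int BLOCK = 4 * 1024;
        static final int BLOCKS = 10_000;

        public static void main(String[] args) throws Exception {
            byte[] block = new byte[BLOCK];

            try (RandomAccessFile f = new RandomAccessFile("seq.dat", "rw")) {
                long start = System.nanoTime();
                for (int i = 0; i < BLOCKS; i++) {
                    f.write(block); // appended in order: the disk head barely moves
                }
                f.getFD().sync();
                System.out.printf("sequential: %d ms%n", (System.nanoTime() - start) / 1_000_000);
            }

            try (RandomAccessFile f = new RandomAccessFile("rnd.dat", "rw")) {
                f.setLength((long) BLOCK * BLOCKS); // pre-size so every seek lands inside the file
                Random rnd = new Random(42);
                long start = System.nanoTime();
                for (int i = 0; i < BLOCKS; i++) {
                    f.seek((long) rnd.nextInt(BLOCKS) * BLOCK); // jump around like a paged BTree
                    f.write(block);
                }
                f.getFD().sync();
                System.out.printf("random: %d ms%n", (System.nanoTime() - start) / 1_000_000);
            }
        }
    }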

LevelDB is good at high-speed writes. It is more or less a write-only database that uses bloom filters (is my data in this gak?), perfectly balanced BTrees written into blocks of gak, and the equivalent of GC for long sequences of gak (gak gets compacted and filtered into other, longer sequences; it never gets updated in place, just consolidated and deleted/merged into larger sequences, which lets it prune, consolidate and merge quickly -- same idea as Google's BigTable tablets). High-speed writes, but mediocre at high-speed reads (not bad, but not great).
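
If you want to kick the tires, here is a minimal sketch using the leveldbjni wrapper (org.fusesource.leveldbjni); the database path and key names are made up:

    import org.iq80.leveldb.*;
    import static org.fusesource.leveldbjni.JniDBFactory.*;
    import java.io.File;

    // Writes land in a log plus an in-memory table, then get compacted into
    // sorted, immutable files -- no in-place updates, so inserts stay fast.
    public class LevelDbWrites {
        public static void main(String[] args) throws Exception {
            Options options = new Options().createIfMissing(true);
            DB db = factory.open(new File("example-db"), options);
            try {
                for (int i = 0; i < 100_000; i++) {
                    db.put(bytes("event:" + i), bytes("payload-" + i));
                }
                // A read may consult several levels; bloom filters prune most of them.
                System.out.println(asString(db.get(bytes("event:42"))));
            } finally {
                db.close();
            }
        }
    }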

How can you mitigate MongoDB's disk-seeking weakness? Using SSDs would help; so would shards. SSDs have no trouble seeking. Shards spread the writes across many machines. Still, though: are you sure MongoDB is right for this app? Have you compared and contrasted the way it works with Couchbase, Cassandra, MySQL, Oracle and LevelDB? Do you know how much data you are going to handle? Do you know how many writes per second, and at what size? Have you benchmarked it? Have you done load testing? When you deploy it, are you monitoring it? How far behind is the eventual consistency?

If it gets too far behind, what is your mitigation plan? At some level of writes, all DBs become hard to deal with: MySQL, Oracle, NoSQL, NewSQL, etc. There is no free lunch and no magic web-scale sauce... there are physical limitations of hardware. Physics will eventually stop you. Oracle DB can and does handle petabytes of data, so NoSQL fanboys, step off. You don't have to use a fully normalized RDBMS; you can skip some normalization and set up really aggressive internal caching, which gets you close to NoSQL ease of use with some very tried-and-true products. MySQL is actually pretty damn amazing tech, as is Oracle DB. MongoDB has its place too. :) LevelDB, MariaDB, Drizzle, etc. don't get enough attention. MongoDB and Redis get all of the NoSQL glory, but they are not the only two NoSQL solutions by a long shot.

