More comments on this issue: Can SQL databases become like nosql databases if they were designed to be that way? Sadly no. There are certain expectations we have when dealing with SQL databases. Logically, a TABLE is something that is “appended to” without delay / conflict, and joins bring back results pretty fast.
However, when a database grows beyond a certain size, we all know the rules change. If we use some vendor specific way to distribute a database (which is a problem unto itself), then we kinda know in the back of our mind that joins are no longer okay. And there is the problem of the distribution itself: how does it take place? Which vendor specific gizmo shabang takes care of it?
With nosql databases, the unit of distribution is the object. We apply a mod() function to the key, we end up with a shard, and that is where the data goes for that key. Simple. We _know_ this takes place for that (any) key. It is out of the box functionality.
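That mod()-over-the-key routing can be sketched in a few lines of Python. This is a toy illustration, not any particular database's implementation; the hash function, shard count, and the in-memory dicts standing in for nodes are all assumptions:

```python
import hashlib

NUM_SHARDS = 4

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a key to a shard index via hash-then-mod."""
    # A digest-based hash is used because Python's built-in hash() is
    # salted per process, and shard placement must be stable.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % num_shards

# Pretend these dicts are the storage nodes.
shards = {i: {} for i in range(NUM_SHARDS)}

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    # Reads for a key always hit the same shard its writes went to.
    return shards[shard_for(key)].get(key)

put("user:42", {"name": "Ada"})
assert get("user:42") == {"name": "Ada"}
```

The point is that placement is a pure function of the key: no coordinator lookup, no vendor specific configuration, and every client that knows the shard count can compute where any key lives.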
The problem with SQL databases is that we (the developers) are expected to bow before the relational gods for our daily needs, but then, when things get too big, we need to think "different" and pray differently. Well, nosql dbs say: think big from day one; small is taken care of, and distribution comes for free.
It doesn’t matter if I have a small database for my application XYZ; if I am hosting this app on the cloud, then I am _already_ part of a BIG database, making use of facilities that were beyond my reach previously.
My point on pedagogical issues is related to this small / big divide: it is easier to learn one set of technology that handles big and small the same way than to learn a technology that handles small fine (in a complicated way -sql, barf-) and big in a totally vendor specific way (double barf). Things like Cassandra and BigTable say we handle big and small the _same_ way, and on the cloud, well – even backups, failover, etc. etc. are taken care of.
I think any SQL database that tries to copy this, will end up being a nosql database itself.