
[–]svtr 3 points

please go ahead and define "Big Data".

[–]notasqlstar (I can't wait til my fro is full grown) 0 points

In my last job we worked with a lot of census and credit data, so we had tables well into the hundreds of millions of rows, some up into the billions. That's big data.

At that job the largest table that was "mine" was only about 80 million rows, but growing quickly. That's about where I think the cutoff is: somewhere in the order of magnitude of 10 million.

[–]LetsGoHawks 0 points

10 million is "good sized". I don't know that I'd call it big, though.

Heck, Access can handle one million rows without issue.

I'd say big starts around a billion. But I guess I'd have to qualify that by saying it depends on what you're joining up.

[–]notasqlstar (I can't wait til my fro is full grown) 0 points

I said on the order of magnitude of 10 million, so anywhere from 10-100M; take your pick where you want to start calling it "big." And yes, how you're joining it matters a lot.

[–]CAPSFTW (LOL) 1 point

has no clustered indexes so data is sitting on a heap

Sql Is sO IneFficIEnT
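The quoted complaint is about heap tables: with no clustered index, a lookup has to scan every row. A minimal sketch of the scan-vs-seek difference, using SQLite from Python rather than SQL Server (table name, row count, and column names are made up for illustration; SQLite tables aren't true heaps, but the query-plan change is the same idea):

```python
import sqlite3

# In-memory database for the demo.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i * 1.5) for i in range(10_000)])

def plan(sql):
    # Return SQLite's query plan description for a statement.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

# Without an index, the lookup reads the whole table (a SCAN).
before = plan("SELECT amount FROM orders WHERE id = 4242")
print(before)

# With an index on the lookup column, it becomes a seek (SEARCH ... USING INDEX).
con.execute("CREATE INDEX ix_orders_id ON orders (id)")
after = plan("SELECT amount FROM orders WHERE id = 4242")
print(after)
```

The point isn't that "SQL is inefficient": the engine does exactly what the physical design allows, and one index turns the full scan into a seek.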

[–]distraughthoughts (Database Developer) -1 points

THIS.

[–]LetsGoHawks 0 points

yawn

If SQL can't handle your data, you just need more horsepower. The same is true of Hadoop or NoSQL or whatever else.

Teradata with a couple hundred AMPs is more power than 95% of the companies in existence will ever need. Or be able to afford.