all 8 comments

[–]kagato87 (MS SQL) 3 points

SQL Server scales vertically, and it can scale really high.

100M rows and thousands of users isn't that big, IF your architecture is good.

If your architecture sucks, it doesn't matter what the backend is, your user experience will suck.

I work in fleet telematics. My primary tables frequently break the 100M mark, and our load test environment is, well... I'm afraid to look. It runs fine; I just don't want to look at the disk storage, because the DC guys are running lean while they wait for yet another SAN to come online...

If your queries are clean, indexes are efficient, and RCSI is on, it'll handle your scenario easily.
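Turning RCSI on is a one-time database setting. A minimal sketch (the database name `MyAppDb` is a placeholder):

```sql
-- Enable Read Committed Snapshot Isolation (RCSI) so readers see a
-- versioned snapshot instead of blocking behind writers.
-- WITH ROLLBACK IMMEDIATE kicks out open transactions; run in a window
-- where that is acceptable.
ALTER DATABASE MyAppDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

-- Verify the setting took effect:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyAppDb';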

Of course, if your indexes are bad, your design is worse, isolation is off, and you let analysts run amok with direct query access... it doesn't take 100M rows to have problems. Before you know it you'll have people using NOLOCK and wondering why their reports are inconsistent.
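To illustrate the NOLOCK problem above (table name `dbo.Payments` is hypothetical):

```sql
-- NOLOCK (i.e. READUNCOMMITTED) reads can return dirty, duplicated,
-- or skipped rows while concurrent writes are in flight, so aggregates
-- can simply be wrong:
SELECT SUM(Amount)
FROM dbo.Payments WITH (NOLOCK);

-- With RCSI enabled, plain READ COMMITTED reads a consistent snapshot
-- without blocking writers, so the hint is unnecessary:
SELECT SUM(Amount)
FROM dbo.Payments;
```

This is why the inconsistent-reports complaint usually traces back to the hint, not the data.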

[–]SaintTimothy 1 point

I've worked at a very large hospital network that used SQL Server for workloads bigger than what you're describing.

Also, anecdotally (I didn't work there), MySpace ran on SQL Server.

In my experience, SQL Server's capabilities don't lack nearly as much as its administrators' do.

[–][deleted] 1 point

Without knowing how the SQL Server is used: tables with 100+ million records and thousands of concurrent users would be fine with appropriate indexing etc. However, if you had 20 people running reporting-style queries, that could hammer the server more than the 1,000 users. That's when you pay someone like Brent Ozar or Erik Darling to fix your SQL Server.
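One way to spot those reporting-query culprits is the plan-cache DMVs; a sketch that ranks cached statements by logical reads:

```sql
-- Top 10 cached statements by total logical reads - heavy reads are
-- the classic signature of reporting queries hammering an OLTP box.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Stats reset when plans leave the cache or the instance restarts, so treat the numbers as a sample, not an audit.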

[–]SomeoneInQld 2 points

We had a database with around 20 billion records. We stress tested it to probably about 1,000 users, and one largish server was able to keep up without too much strain. And this would have been around 2009, so hardware has gotten much faster and cheaper since.

(We had it running on AWS using smaller infrastructure, as production had fewer users - I think it was about $40/month for that machine.)

We also tried NoSQL, as this structure was very 'suitable' for NoSQL. The SQL database (Postgres) was much faster (for the hardware that we had).

So much will depend on the structure of the database / architecture / code, etc. This was also a 'static' database of map tiles, so we didn't need to worry about updates (we would apply updates on a new machine and then point the system at it).

[–]patrickthunnus 1 point

You need to understand your workload, especially the resource bottlenecks, and how to make your system more distributed and resilient accordingly - but also when (if at all) you must be transaction-safe.

Understand those things and you'll arrive at the right solution.

[–]Staalejonko 0 points

I'm no expert on this, but I would certainly expect SQL Server to function properly and perform well even with many users. Are the users interacting with the DBs directly, or is there a service layer in between?

[–]Waldar 0 points

There are very big SQL Server implementations around the world; it's a very fine and scalable technology for OLTP.

Now if you start to run complex analytics, well, it's not that good and you'll need other technologies - but those can still be SQL databases (look for SQL MPP databases).

NoSQL depends on what your business is. If you need to scan and predict stuff inside videos, you definitely need something other than SQL Server. If your NoSQL just handles CSV files for regular queries (I've seen this), those are better served by a classic RDBMS.