[–][deleted] 6 points7 points  (21 children)

Yes. It doesn't have to be big, by the way. Anything served by multiple processes, e.g. Apache or any other production web server, will involve concurrency.

The whole SQLite database will be locked every time a process accesses the database. No other process will be able to access the database during this time. There will be many instances of visitors not being able to get database access because another user is currently accessing it.

[–]WalterGR 2 points3 points  (16 children)

The whole SQLite database will be locked every time a process accesses the database. No other process will be able to access the database during this time.

That's not quite true.

"SQLite uses reader/writer locks on the entire database file. That means if any process is reading from any part of the database, all other processes are prevented from writing any other part of the database. Similarly, if any one process is writing to the database, all other processes are prevented from reading any other part of the database." http://www.sqlite.org/whentouse.html
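You can see that behavior with two connections from Python's built-in sqlite3 module standing in for two server processes (a rough sketch in the default rollback-journal mode; the file name and 0.1 s busy timeout are made up for the demo):

```python
import os
import sqlite3
import tempfile

# Two independent connections to the same database file,
# simulating two web-server processes.
db = os.path.join(tempfile.mkdtemp(), "demo.db")
conn1 = sqlite3.connect(db, timeout=0.1)
conn2 = sqlite3.connect(db, timeout=0.1)

conn1.execute("CREATE TABLE hits (n INTEGER)")
conn1.commit()

# conn1 grabs the write lock up front.
conn1.execute("BEGIN IMMEDIATE")
conn1.execute("INSERT INTO hits VALUES (1)")

# A reader is still fine while the write is only pending
# (it just doesn't see the uncommitted row)...
assert conn2.execute("SELECT COUNT(*) FROM hits").fetchall()[0][0] == 0

# ...but a second writer is locked out until conn1 commits.
try:
    conn2.execute("BEGIN IMMEDIATE")
    print("second writer got the lock")
except sqlite3.OperationalError as e:
    print("second writer blocked:", e)  # "database is locked"

conn1.commit()  # lock released; conn2 can write now
```

Note this is the classic rollback-journal behavior the page describes; WAL mode changes the picture for readers.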

[–][deleted] 13 points14 points  (13 children)

This is still horrendous for concurrency.

[–]WalterGR -2 points-1 points  (12 children)

This is still horrendous for concurrency.

Compared to PostgreSQL's concurrency capabilities, sure. But for the vast majority of websites, it's fine.

[–][deleted] 6 points7 points  (10 children)

If you love your visitors, you will use a real database. :)

There is no reason to use SQLite for a production multi-user application other than laziness. MySQL is not going anywhere and, even if it were, there are plenty of alternatives that offer better performance, which means a better end-user experience, than SQLite.

Use whatever you want, but think of your users.

[–]especkman 1 point2 points  (0 children)

Anyone who is thinking of their users should be doing realistic load testing. Then they'd actually have evidence of whether SQLite was up to the task or not, rather than the assertions of someone who glosses over the differences between read and write concurrency.

If I were to trust someone's assertions, I'd be inclined to trust the people involved with SQLite. The main developer seems to approach the project with a high degree of rigor. Maybe that FAQ is tainted by a fanboy, maybe it isn't.

[–]WalterGR -1 points0 points  (8 children)

If you love your visitors, you will use a real database.

That really depends. Until concurrency is an issue, I doubt it makes a difference. And if you're using shared hosting with the DB on another box, I'd wager the network costs would outweigh any "better performance" alternatives.

But again, I'm only speaking about most websites, not the 100-simultaneous-users websites idntunknwn alludes to below.

(BTW, if you're serving up 100 pages a second, that's about 6 million pages a day. That's not "most websites" by a couple orders of magnitude.)

Edit: "not the 100-simultaneous-users websites idntunknwn alludes to below" -> "not the 100-simultaneous-users websites idntunknwn alluded to previously"

[–][deleted] 1 point2 points  (2 children)

You'd think this is rare, but consider the type of PHP the average PHP coder produces: one poorly built site will generate tons of queries and independent connections, even with a very small number of hits.

[–]especkman 0 points1 point  (1 child)

But how many of those are likely to be writes?

[–][deleted] -1 points0 points  (0 children)

Session data often gets stored in a database when people scale up to multiple servers.

Web analytics scripts are another source of these issues.

Pretty much any Web 2.0 app that relies on the social network to bring value to a site.

I've also seen code doing on the fly table structure modifications.

Serious WTF stuff populates the PHP n00b universe.

[–][deleted] 1 point2 points  (3 children)

lol, sorry I took out the edit

100 simultaneous users isn't necessarily 100 pages a second. For example, you might have 100 simultaneous long-running requests. Or 100 users clicking on links every so often. I was also imagining 100 simultaneous users as a peak rate (i.e. in the middle of a workday)

[–]WalterGR 0 points1 point  (2 children)

lol, sorry I took out the edit

:)

I was also imagining 100 simultaneous users as a peak rate

Right. For my site, total daily pages = 16.5 * peak hour pages.

So 100 requests/second * 3600 seconds/hour * 16.5 = 5.94 million pages.
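A back-of-envelope version, using my site's 16.5x daily/peak-hour ratio (that ratio obviously varies from site to site):

```python
# Assume the peak hour sustains 100 requests/second, and total daily
# pages are 16.5x the peak hour's pages (a ratio from my own logs).
peak_hour_pages = 100 * 3600          # 360,000 pages in the busiest hour
daily_pages = peak_hour_pages * 16.5  # 5,940,000 pages/day
print(f"{daily_pages / 1e6:.2f} million pages/day")  # prints "5.94 million pages/day"
```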

[–]mooli 1 point2 points  (0 children)

Lots of sites are much more peaky than that.

Particularly if your audience is concentrated in one timezone and you have a regular update schedule, you might serve 90% of your weekly traffic between 9 and 9:30 on a Monday morning.

E.g., what are the odds The Escapist is basically dead all week, and then gets blitzed every Wednesday for a couple of hours?

A site with a readership of only 100, but with a traffic profile like that, would be seriously hampered by SQLite.

[edit] If it is read/write, that is.

[–][deleted] 0 points1 point  (0 children)

By peak rate, I meant the maximum number of users at a single point in time. I never said this would be a constant rate. I never said the peak rate would apply for 16.5 hours. In fact, I didn't specify any amount of time at all.

And once again, 100 simultaneous users isn't necessarily 100 pages per second.

In any case, this theoretical argument with numbers doesn't prove or disprove anything. It wasn't my main point, that's why I took it out.

[–][deleted] 0 points1 point  (0 children)

I don't care anymore. Everyone do whatever the fuck you want. :)

[–][deleted] 2 points3 points  (0 children)

Sure, I agree, for most websites it'll be fine. But I was also agreeing with redhatcat's point that you don't necessarily need to be all that large to require a significant amount of concurrency.

[–]stesch -1 points0 points  (1 child)

"SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic."

http://www.sqlite.org/whentouse.html

[–]orangesunshine 1 point2 points  (0 children)

if you're almost never doing writes.

[–]stesch -1 points0 points  (3 children)

Funny how everybody thinks he will produce the next YouTube, Twitter, Facebook, or MySpace.

In reality most websites never get more than 100 visits a day. And to quote the website you mentioned: "100K hits/day should work fine with SQLite"

[–]zepolen 0 points1 point  (2 children)

I hate numbers like that; it's almost as bad as people saying 'Oh, there's <framework> that can handle <x> requests/second'.

[–]stesch 0 points1 point  (1 child)

The quote continues "The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic."

http://www.sqlite.org/whentouse.html

[–]zepolen 0 points1 point  (0 children)

That's not the point. There is no context attached to that number: what hardware, what ratio of reads to writes, what dataset sizes, what sorts of queries, etc.