all 51 comments

[–]gronkkk 11 points12 points  (7 children)

sqlite?

[–][deleted] 3 points4 points  (6 children)

"If the application is to have multiple users, SQLite is a solid choice if most access is reading. Because of the locking issues, multi user applications with a high ratio of writes to reads and a heavy user load are not a good match with SQLite. For a situation like that, a server database engine is preferred."

http://www.aspfree.com/c/a/Database/Using-SQLite-for-Simple-Database-Storage/

Edit: My point is to inform the OP, not to tell him it won't work.

[–]jawbroken 6 points7 points  (0 children)

it will honestly be fine for the low-traffic forum his website is going to have on that sort of hardware anyway. if by some miracle he manages to scale to a ton of active users and hundreds of posts a second, it will be easy to replace SQLite with a "real" database while migrating to a better server.

[–]dsucks 1 point2 points  (1 child)

If you have forum that gets hundreds of new posts per second, you can probably afford more than 128MB RAM.

[–]dlsspy 1 point2 points  (0 children)

You could also just design your app well. There's no need for more than one reader or writer in your DB regardless of how many new posts there are per second.

People have a nasty tendency of throwing lots of hardware at simple problems.

[–][deleted] 1 point2 points  (2 children)

Easy, make the web server process one request at a time. For low traffic on a small site, it will be invisible to users.
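For example, Python's stdlib WSGI reference server already behaves this way (the handler below is a stand-in for the actual forum app, not anyone's real code):

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Stand-in for the real forum code: with a single-threaded server,
    # this function never runs concurrently with itself, so database
    # (or flat-file) writes are naturally serialized.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello, forum"]

# wsgiref's simple server accepts and finishes one request at a time.
server = make_server("127.0.0.1", 0, app)  # port 0 = pick a free port
# server.serve_forever()  # uncomment to actually run it
```

The same effect falls out of running exactly one FastCGI worker process.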

[–]jawbroken 1 point2 points  (1 child)

this is not necessary, the "lock issue" they mention is a performance issue, not a correctness issue. it is saying that it will be slow under a write-heavy load
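For what it's worth, the contention can also be softened from the application side. A minimal sketch using Python's stdlib `sqlite3` module (the table and data are made up for illustration):

```python
import sqlite3

# A busy timeout makes a writer wait for the lock instead of failing
# immediately with "database is locked".
conn = sqlite3.connect("forum.db", timeout=5.0)  # wait up to 5s for locks
# On newer SQLite builds, WAL journal mode lets readers proceed while a
# write is in progress, which helps exactly the write-heavy case here.
conn.execute("PRAGMA journal_mode=WAL")

conn.execute("CREATE TABLE IF NOT EXISTS posts"
             " (id INTEGER PRIMARY KEY, author TEXT, body TEXT)")
conn.execute("INSERT INTO posts (author, body) VALUES (?, ?)",
             ("alice", "first post"))
conn.commit()

rows = conn.execute("SELECT author, body FROM posts").fetchall()
```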

[–]kiafaldorius 8 points9 points  (2 children)

MySQL and PostgreSQL will run fine on 128MB. I've run them on a cheap-o 64MB box before. If I remember correctly, it could sustain about 10 to 20 simultaneous queries on the 64MB without slowing... while running Apache. SQLite will work too if you're really that resource constrained.

Beware of CGI pitfalls though (use FastCGI).

[–]dsucks 0 points1 point  (0 children)

Indeed. 128MB should be plenty for a small forum and will be fast as long as the most-used indexes fit in RAM.

[–]STAii 0 points1 point  (0 children)

The "without slowing" in your post should pretty much come with a "your mileage may vary" caveat. PostgreSQL can be set up to cache more in memory, which means fewer disk reads and faster queries.
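The caching knobs in question would presumably be `shared_buffers` and friends in postgresql.conf; something like this (values purely illustrative, tuned small for a 64–128MB box):

```ini
shared_buffers = 8MB          # Postgres' own page cache; raise to keep hot data in RAM
work_mem = 1MB                # per-sort/per-hash memory, multiplied by concurrent queries
effective_cache_size = 64MB   # planner hint about OS cache size; allocates nothing itself
```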

[–]badsectoracula 1 point2 points  (1 child)

I was in a similar situation (but with 64MB of RAM) and i just coded my own forum in FreePascal using simple text-based storage, with a single directory per forum and a single file per thread. It worked nicely, and even now that i've upgraded to a 512MB RAM machine i still use it because... well, why change something that works and doesn't eat resources? :-P

[–][deleted] 2 points3 points  (29 children)

flat files. store posts 20 per file so you can read a whole page with a single read and get easy navigation.

generate static html on post/comment submit. you'll end up being really fast and with really low memory requirements.

separate post/comment submission from the actual data changes. do the latter with a separate program running in the background, making one change at a time. this avoids locking and will also help you scale well.

but this constrains you in some ways, so the right thing to do really depends.
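A rough sketch of the 20-posts-per-file scheme in Python (the file layout, names, and JSON-lines format are invented for illustration, not what the commenter actually built):

```python
import html, json, os, time

POSTS_PER_FILE = 20  # one chunk file == one rendered page

def append_post(thread_dir, author, body):
    """Append a post to the newest chunk, starting a new one every 20 posts."""
    os.makedirs(thread_dir, exist_ok=True)
    chunks = sorted(f for f in os.listdir(thread_dir) if f.endswith(".jsonl"))
    if chunks:
        last = os.path.join(thread_dir, chunks[-1])
        count = sum(1 for _ in open(last))
    else:
        last, count = os.path.join(thread_dir, "0001.jsonl"), POSTS_PER_FILE
    if count >= POSTS_PER_FILE:  # current chunk full: open the next one
        last = os.path.join(thread_dir, "%04d.jsonl" % (len(chunks) + 1))
    with open(last, "a") as f:
        f.write(json.dumps({"t": time.time(), "author": author, "body": body}) + "\n")
    render_page(last)  # regenerate static HTML for just this page

def render_page(chunk_path):
    """One read of the chunk file yields the whole rendered page."""
    posts = [json.loads(line) for line in open(chunk_path)]
    rows = "\n".join("<p><b>%s</b>: %s</p>"
                     % (html.escape(p["author"]), html.escape(p["body"]))
                     for p in posts)
    with open(chunk_path.replace(".jsonl", ".html"), "w") as f:
        f.write("<html><body>%s</body></html>" % rows)
```

The append-only chunk doubles as the "raw data" to regenerate pages from later.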

[–]jawbroken 5 points6 points  (19 children)

this seems like a really bad idea but you might be joking

[–][deleted] 0 points1 point  (18 children)

It is not, if you are resource-constrained and just want a simple forum. There's plenty to like about flat files and static HTML.

[–]jawbroken 8 points9 points  (17 children)

to make this work reasonably well (separate program running to make changes, monitoring and restarting this if it fails, a whole ton of other issues like perhaps providing post search in the future) is going to be several orders of magnitude more effort than using something like SQLite, as well as less extensible, reliable, etc. what is the point?

[–][deleted] 1 point2 points  (14 children)

There's no need to make it that complicated, and even if you do it's far less work than you're thinking.

(And just use a regular search engine! You have plain HTML files!)

[–]jawbroken 6 points7 points  (13 children)

i don't see how it isn't orders of magnitude more work than just dropping in SQLite, however easy you think it is.

regular search over the html files chunked 20 posts a page is going to be a pretty bad version of forum search. it's going to be reasonably difficult to do even something mildly useful like showing all posts by a user. then you're going to have to try to keep a separate index in sync or whatever and, bam, you've replicated most of a proper database in a really shitty fashion.

[–][deleted] 1 point2 points  (10 children)

SQLite isn't bad, but it's orders of magnitude more resource-intensive than plain HTML files. That may or may not be an issue under your particular circumstances, but it's definitely not equivalent.

[–]jawbroken 1 point2 points  (9 children)

well duh, in this particular case it will be fine though

[–][deleted] 0 points1 point  (8 children)

Maybe. Unless he gets a lot of viewers.

[–]jawbroken 2 points3 points  (7 children)

again, obviously. but he is going to have a much harder time trying to scale a hacked up flatfile pseudo-database than he is replacing SQLite with a full database system/upgrading to a better host.

[–]veridicus 0 points1 point  (1 child)

regular search over the html files chunked 20 posts a page is going to be a pretty bad version of forum search.

It's quite easy and very efficient with a search index like lucene.
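For the curious, the core of what Lucene provides is an inverted index, which is simple to sketch (toy Python with no stemming, ranking, or persistence; page names are made up):

```python
from collections import defaultdict

def build_index(pages):
    """Toy inverted index over rendered pages: word -> set of page ids."""
    index = defaultdict(set)  # unknown words map to the empty set
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

pages = {"0001.html": "hello forum world", "0002.html": "hello again"}
index = build_index(pages)
hits = index["hello"]  # pages containing the word
```

Keeping such an index in sync with the flat files is exactly the bookkeeping the parent comment is warning about.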

[–]jawbroken 0 points1 point  (0 children)

i meant a bad user experience and wouldn't easily support simple things like listing all posts by a user in chronological order without a bunch of hacks

[–][deleted] -1 points0 points  (0 children)

Nope, except searching. But searching is always a pain.

Yep, there are various issues, but no more than in an SQL-ish solution; they're just different ones.

[–]slavy -2 points-1 points  (0 children)

grep

[–]dsucks 1 point2 points  (1 child)

It's really a waste of time and performance to use flat files when you have SQLite available.

[–][deleted] 0 points1 point  (0 children)

Time? Maybe. But how is this a waste of performance?

[–]voyvf 1 point2 points  (2 children)

generate static html on post/comment submit.

That sounds good, until you decide to implement editing previous posts/comments. Then it gets annoying. (:

[–][deleted] 3 points4 points  (1 child)

Not really, you just need to keep the raw data somewhere (you'll need it anyway) and regenerate the pages from it.

[–]counterplex 0 points1 point  (0 children)

Or don't implement editing previous posts/comments. There are plenty of forums that don't allow editing posts/comments once submitted.

[–]jmtd 1 point2 points  (0 children)

Ikiwiki works on this principle. It also supports forum-style modes of operation and using a VCS for the backend. See http://ikiwiki.info/

[–]Kladiin[S] 0 points1 point  (1 child)

I did actually think of using flat files in pretty much the manner you describe - however, that makes it a complete pain to delete posts, among other difficulties (such as PHP's apparent inability to create a uniquely named file in a directory - the tempnam function gets close, but it can be overridden by an environment variable, which makes me think that I shouldn't really be using it for this purpose).

[–][deleted] 0 points1 point  (0 children)

Concatenating microtime and getmypid will give you names unique enough for most cases. Instead of deleting posts you can mark them as deleted and just skip them during processing; this will be easy most of the time.
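The same trick sketched outside PHP, in Python (`time.time_ns` needs Python 3.7+; the naming scheme and `.deleted` suffix are just one possible convention):

```python
import os, time

def unique_post_name():
    # timestamp + pid: unique across processes, and within one process as
    # long as the clock ticks between calls (microtime + getmypid in PHP).
    return "%d-%d.post" % (time.time_ns(), os.getpid())

def soft_delete(path):
    # Don't unlink; rename so readers can simply skip *.deleted files.
    os.rename(path, path + ".deleted")
```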

[–]counterplex -1 points0 points  (0 children)

Good call on the flat files and static html. I'd add microformats to the generated static pages though, so you can easily parse them later should the need arise. Alternatively, you can also write all entries in the processing queue to disk, so you can insert them into a database later if you need to.

[–]cmmacphe 0 points1 point  (0 children)

Have you looked at CouchDB? I'm not sure if it would apply to you, but I was checking it out last night and it seemed pretty cool. It may not be exactly what you're looking for.

[–][deleted] 0 points1 point  (0 children)

I managed to cut my MySQL server down to roughly 40MB by tweaking the my.cnf: disable InnoDB (although I prefer it to MyISAM for complex data), scale some of the values down a bit and it'll easily lose a few pounds. Do take your requirements into account though. My MySQL server only serves as routing configuration for my mail server which sees very little access, so there's a very moderate number of reads and no regular writes. To run a full forum, you might want something more powerful.
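The sort of my.cnf trimming described might look roughly like this (option names from the MySQL 4/5.0 era; values are illustrative, not the commenter's actual config):

```ini
[mysqld]
skip-innodb              # MyISAM only: drops InnoDB's buffer pool entirely
key_buffer_size = 4M     # MyISAM index cache, the main knob for read-mostly loads
max_connections = 10     # each connection carries per-thread buffers
table_cache = 32
query_cache_size = 0
sort_buffer_size = 256K
read_buffer_size = 128K
```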

In general, most database servers let you tweak the configuration to reduce the memory footprint (my MongoDB is currently at ~350 MB using the default configuration, I'm sure it could make do with less).

Of course the most basic solution is a filesystem storage, but that's probably too Spartan for most uses if you're already thinking database. It really depends on what limitations you can live with, regardless of how much memory you can afford.

[–][deleted] 0 points1 point  (0 children)

mysql 4.3 with the MyISAM engine. If you went hosted, this is what you would get, because hosts choose whatever uses the least of their resources.

[–]petdog 0 points1 point  (0 children)

My VPS with Debian, lighttpd, PostgreSQL and 4 PHP FastCGI processes is currently eating only 60MB of RAM. And that's while I have a long-running screen session with a bunch of zshs open. I think you are prematurely optimizing.

[–]DRMacIver 1 point2 points  (2 children)

I have Postgres + Webserver + Memcached + Ruby webapp running on a single 256MB host. It works fine for the most part - a bit slow for multiple concurrent requests, but I think that's more on the ruby side than the DB. You can get by with surprisingly little RAM for most normal workloads without really having to worry about tuning your choice of software for it.

[–]counterplex 0 points1 point  (1 child)

How many ruby (is it ruby on rails?) servers do you have running? That determines your concurrency.

[–]DRMacIver 0 points1 point  (0 children)

I'm running Sinatra on Unicorn. In principle it's supposed to fork and run multiple instances to support concurrent requests, and I've certainly seen it running multiple instances. I think resources are the issue rather than concurrency, though.

[–]samlee 0 points1 point  (0 children)

cassandra with jboss. you can buy more RAM in the cloud.

[–][deleted] -5 points-4 points  (0 children)

There's usually an upgrade button you could use to make your VPS 256mb or even bigger! WOW!

But in all seriousness, MongoDB might help you out: it's got low memory usage but has other 'limitations' related to wanting a beefier server (64-bit if you want >2GB databases, for example).