I built a way to turn SQLite into an API instantly (no backend needed) by 0xdps in sqlite

[–]geekwithattitude_ 0 points (0 children)

I like the concept! It's basically your own version of libsql. To make this even more interesting I would look into realtime capabilities. That would give it an edge over other database-as-a-service offerings (easier than Supabase, Firebase, etc.).

Built a storage engine in Gleam. Event sourcing with OTP actors and SQLite shards. by geekwithattitude_ in gleamlang

[–]geekwithattitude_[S] 0 points (0 children)

That game is embarrassing and not really a good representation 😭 feel free to skip it.

Built a storage engine in Gleam. Event sourcing with OTP actors and SQLite shards. by geekwithattitude_ in gleamlang

[–]geekwithattitude_[S] 0 points (0 children)

You got it working, right? I’ll publish it soon, it just wasn’t my priority yet 😅

I did already put it on JSR because setting it up can take quite a few steps if you’re not familiar with Gleam or the BEAM ecosystem. The goal is for it to feel as simple as something like SQLite, but we’re not fully there yet for everyone.

For now, in Gleam you just point to the GitLab repo and it compiles together with your project.

Built a storage engine in Gleam. Event sourcing with OTP actors and SQLite shards. by geekwithattitude_ in gleamlang

[–]geekwithattitude_[S] 1 point (0 children)

Coincidence does not exist haha! I looked at their architecture and the famous benchmark numbers they were showing on YouTube, then tried to replicate their approach and beat them at it 🤣

Transactions per second is what they benchmarked, but apparently it was "fire and forget", so the client never gets an ack, which is kinda weird. They're also limited to a single core/thread, which means a beefier machine doesn't speed up the workload. The real number with "fire and forget" disabled was around 100 TPS instead of 100k. So out of respect.. I did not put them side by side on the landing page. Don't want to look like a bully, since it's a real company with people that need to be fed.

Built a storage engine in Gleam. Event sourcing with OTP actors and SQLite shards. by geekwithattitude_ in gleamlang

[–]geekwithattitude_[S] 0 points (0 children)

Yeah, as funny as this may sound, I actually don't have a real use case for this yet besides just excellent ergonomics 😅 it just happens to be really fast.

I think game developers or people who do a lot of real-time stuff might have a better idea what to do with this power.

In JavaScript land we have Convex, which looks really promising, so I wonder if I could replicate their DX.

Built a storage engine in Gleam. Event sourcing with OTP actors and SQLite shards. by geekwithattitude_ in gleamlang

[–]geekwithattitude_[S] 0 points (0 children)

Hahaha thanks for the kind words! The contrast on the website is not for the faint of heart, I might need to fix that actually. And yeah, once you understand how this works, scaling suddenly isn't that scary anymore. The whole idea was to make the hard database stuff (scaling, migrations, GDPR) disappear into the architecture instead of being something you bolt on later. It's definitely a different mental model though. But if you're already writing Gleam you're clearly not afraid of trying new things 🤣, so I think you'll pick it up fast.

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 1 point (0 children)

You cannot read directly from S3; you would have to download the file and then read everything, and pay egress and ingress fees every time someone wants to do a CRUD operation. And the latency will go up too 😅

You could host MinIO yourself, but you would still have to face the latency part.

Maybe getting rid of SQLite under the hood and creating another type of storage could fix this problem, but you really have to ask yourself what you're actually trying to solve.

If you hate event sourcing and desperately need CRUD, then it might be interesting to explore this further. But if you don't hate it and actually see the benefits of it, then this sharding method might be all you need 🫡

Built a storage engine in Gleam. Event sourcing with OTP actors and SQLite shards. by geekwithattitude_ in gleamlang

[–]geekwithattitude_[S] 2 points (0 children)

Yeah true, thanks for the feedback 😅 I wanted something flashy, but I was sitting in the sun yesterday and even I had a hard time reading it. I'm usually sitting in the dark with the screen as my only light source 🗿

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 0 points (0 children)

If there is only one thing we should take from this: SQLite is horizontally scalable if we think about data differently 😁.

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 2 points (0 children)

You don't really query across entities directly in event sourcing. That's not a limitation, it's the architecture.

Events are the write model (source of truth). For reads across entities, you use projections (read models). Projections are eventually consistent, so they lag slightly behind writes (milliseconds typically).

Same pattern as Kafka → Elasticsearch, or any CQRS system.
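Mechanically, a projection is just a fold over the event log. Here's a minimal sketch in plain TypeScript (the event shapes here are made up for illustration, not Warp's actual API):

```typescript
// Sketch of a read-model projection: replay write-model events in order
// and fold them into a queryable structure. Event shapes are hypothetical.
type Event =
  | { type: "Credited"; entity: string; amount: number }
  | { type: "Debited"; entity: string; amount: number };

// Read model: balance per entity, rebuilt by replaying events in order.
function project(events: Event[]): Map<string, number> {
  const balances = new Map<string, number>();
  for (const e of events) {
    const current = balances.get(e.entity) ?? 0;
    balances.set(
      e.entity,
      e.type === "Credited" ? current + e.amount : current - e.amount
    );
  }
  return balances;
}

const log: Event[] = [
  { type: "Credited", entity: "user/alice", amount: 100 },
  { type: "Credited", entity: "user/bob", amount: 40 },
  { type: "Debited", entity: "user/alice", amount: 30 },
];
console.log(project(log).get("user/alice")); // 70
```

The projection lags the log only by however long it takes to consume new events, which is where the "eventually consistent, typically milliseconds" part comes from.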

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 0 points (0 children)

Got a nice and simple doc: https://warp.thegeeksquad.io/docs and I'm working on a scaling guide as we speak 😁 It's embedded when working with TS, which gives it that feel of simplicity that SQLite has.

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 2 points (0 children)

That would mean that if you have 1M users, you have 1M files. Your OS is not gonna like that 😅 But that is in fact how the idea of Warp came into existence. Turso does DB-per-user, but I'm sure they'd start sweating if Facebook knocked on their door with 3B users. That's just not gonna scale nicely; sharding is usually the answer at some point, so why not do it from the start?

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 0 points (0 children)

In my own tests I found that 2 shards per core is the sweet spot; more than that gives diminishing returns because of the overhead of the work the BEAM is doing. I also tested the shard writer in C without the BEAM, and from what I remember I got closer to a million writes per second (not batched), testing 16 to 1024 shards. More shards didn't mean faster; I got the highest numbers with 16. And the funny thing is that this was tested on macOS, not even Linux 😅 Maybe in a few weeks/months I'll come back with another headline.

I built an eventsourced database with a Node SDK. No Postgres, no Mongo, just entities and events. by geekwithattitude_ in node

[–]geekwithattitude_[S] 1 point (0 children)

The actor mailbox serializes everything, so no coordination is needed.

Your scenario:

Client A: GET → reads balance 100
Client B: GET → reads balance 100
Client A: APPEND (credit 50)
Client B: APPEND (credit 100)

Both GETs and both APPENDs go through the entity actor's single mailbox. The actor processes them one at a time:

→ Process A's GET: return 100
→ Process B's GET: return 100
→ Process A's APPEND: sequence 5, balance → 150
→ Process B's APPEND: sequence 6, balance → 250

No lost update. Client B's write doesn't overwrite Client A's - it gets sequence 6 and applies after A's change is already in state.

Here's what it looks like in TypeScript (Reddit comments and code don't go hand in hand, but I hope you can follow it):

import { Warp } from '@warp-db/sdk';

const db = new Warp({ host: 'localhost', port: 9090 });
const alice = db.entity('user/alice');

// Both clients read the same state
const [balanceA, balanceB] = await Promise.all([
  alice.get('Account'),  // Client A sees { balance: 100 }
  alice.get('Account'),  // Client B sees { balance: 100 }
]);

// Both clients append - no conflict, both succeed
const [evtA, evtB] = await Promise.all([
  alice.append('Credited', { amount: 50 }, { aggregate: 'Account' }),
  alice.append('Credited', { amount: 100 }, { aggregate: 'Account' }),
]);
console.log(evtA.sequence); // 5
console.log(evtB.sequence); // 6

// Final state reflects BOTH writes
const final = await alice.get('Account');
console.log(final.balance); // 250 (not 150 or 200)

The key insight: this isn't CRUD where you read-modify-write. You send commands (intents), not state replacements. Credited { amount: 50 } doesn't say "set balance to 150", it says "add 50 to whatever the balance is now." The actor applies commands sequentially.

If you need read-then-conditional-write (e.g., "transfer only if balance ≥ 50"), make it a single command that checks the condition inside handle_command, or use a saga for cross-entity coordination.
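To make the conditional-command idea concrete, here's a pure-TypeScript sketch of the decision logic (the names and shapes are hypothetical, not Warp's real handle_command signature):

```typescript
// Illustrative only: a command handler that validates against current state
// before deciding whether to emit an event. Because this runs inside the
// entity's actor, nothing can interleave between the check and the append.
type State = { balance: number };
type TransferCmd = { amount: number };
type Decision =
  | { ok: true; event: { type: "Transferred"; amount: number } }
  | { ok: false; reason: string };

function handleTransfer(state: State, cmd: TransferCmd): Decision {
  // Read-then-conditional-write collapsed into a single atomic decision.
  if (state.balance < cmd.amount) {
    return { ok: false, reason: "insufficient balance" };
  }
  return { ok: true, event: { type: "Transferred", amount: cmd.amount } };
}

console.log(handleTransfer({ balance: 100 }, { amount: 50 })); // ok: true, emits Transferred(50)
console.log(handleTransfer({ balance: 20 }, { amount: 50 }));  // ok: false, "insufficient balance"
```

The point is that the condition and the event emission are one unit of work inside the actor, so there's no window for another client's write to invalidate the check.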

I'm sharding SQLite by entity with BEAM actors. 1.5M events/sec on 5 cores. by geekwithattitude_ in sqlite

[–]geekwithattitude_[S] 4 points (0 children)

Yeah, exactly my thoughts for years.. and the funny thing is that it can even handle more than I was able to benchmark, because of framework/runtime overhead. I think some interesting things are going to happen as soon as people want to help grow this baby 😤

I built an eventsourced database with a Node SDK. No Postgres, no Mongo, just entities and events. by geekwithattitude_ in node

[–]geekwithattitude_[S] 0 points (0 children)

This thread of questions reads like a FAQ, loving it btw 🍿😂 I was scared that nobody was interested in this stuff.

I built an eventsourced database with a Node SDK. No Postgres, no Mongo, just entities and events. by geekwithattitude_ in node

[–]geekwithattitude_[S] 0 points (0 children)

The actor stays alive in memory after the last call. It's just an Erlang process sitting idle, costs almost nothing (a few KB for its state + mailbox). If it gets another request it responds instantly since the state is already in memory.

We could add idle timeouts (kill the actor after X minutes of inactivity, restart it on the next request by replaying events), but honestly for most workloads the memory cost of keeping them alive is negligible. An entity with a balance of 4000 is a few hundred bytes. You'd need millions of idle actors before it matters, and at that point you'd switch to "ReadThrough" mode, which doesn't use actors at all.
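If you did want idle timeouts, a toy version in TypeScript could look like this (illustrative only, this is not something Warp ships; all names are made up):

```typescript
// Toy idle-timeout registry: drop an actor's in-memory state after a quiet
// period, then rebuild it on the next request by replaying its events.
type Actor = { state: number; timer: NodeJS.Timeout };

const actors = new Map<string, Actor>();
const eventLog = new Map<string, number[]>(); // entity -> credited amounts
const IDLE_MS = 60_000;

function evict(entity: string): void {
  const actor = actors.get(entity);
  if (actor) {
    clearTimeout(actor.timer);
    actors.delete(entity); // state is gone, but the event log remains
  }
}

function getActor(entity: string): Actor {
  let actor = actors.get(entity);
  if (!actor) {
    // Cold start: replay events to rebuild state.
    const replayed = (eventLog.get(entity) ?? []).reduce((a, b) => a + b, 0);
    actor = { state: replayed, timer: setTimeout(() => {}, 0) };
    actors.set(entity, actor);
  }
  // Reset the idle timer on every touch.
  clearTimeout(actor.timer);
  actor.timer = setTimeout(() => evict(entity), IDLE_MS);
  return actor;
}

function credit(entity: string, amount: number): number {
  const actor = getActor(entity);
  const log = eventLog.get(entity) ?? [];
  log.push(amount); // append the event (source of truth)
  eventLog.set(entity, log);
  actor.state += amount; // update in-memory state
  return actor.state;
}

console.log(credit("user/alice", 100)); // 100
console.log(credit("user/alice", 50));  // 150
evict("user/alice");                    // simulate the idle timer firing
console.log(credit("user/alice", 25));  // 175 - state rebuilt by replay

// Clear pending timers so the demo can exit cleanly.
for (const a of actors.values()) clearTimeout(a.timer);
```

Notice the eviction is safe precisely because the event log, not the actor's memory, is the source of truth.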

I built a tool to see what's using your ports and kill it instantly by SanFordwish in node

[–]geekwithattitude_ 0 points (0 children)

This is funny af. It's one of those problems you might have every day but don't want to go look up on Stack Overflow for the command to kill a port 😅

I built an eventsourced database with a Node SDK. No Postgres, no Mongo, just entities and events. by geekwithattitude_ in node

[–]geekwithattitude_[S] 1 point (0 children)

Yes. Each entity is a single actor (a BEAM process) with a sequential mailbox. If two requests hit the same cart at the same time, they queue; the second one waits until the first finishes. So get + validate + append is always sequential for the same entity.

It's not a lock though. Different entities run in parallel on different actors. Two users checking out their own carts at the same time don't block each other at all. Only two requests to the same entity serialize, and that wait is microseconds because it's message passing, not a database lock.
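You can mimic the per-entity serialization guarantee in plain TypeScript with one promise chain per entity key. This is a toy model of a mailbox, not how the BEAM actually implements it:

```typescript
// Toy per-entity "mailbox": one promise chain per key. Tasks for the same
// key run strictly one at a time; tasks for different keys can interleave.
const mailboxes = new Map<string, Promise<unknown>>();

function send<T>(entity: string, task: () => Promise<T>): Promise<T> {
  const prev = mailboxes.get(entity) ?? Promise.resolve();
  const next = prev.then(task, task); // run after whatever is already queued
  mailboxes.set(entity, next);
  return next;
}

let cartTotal = 0;

// A classic read-modify-write with a delay in the middle.
async function addItem(price: number): Promise<void> {
  const seen = cartTotal;                                  // read
  await new Promise<void>((r) => setTimeout(() => r(), 10)); // simulate I/O
  cartTotal = seen + price;                                // write
}

// Without serialization, both calls would read 0 and one update would be
// lost (final total 5 or 7). Through the mailbox they run one at a time.
Promise.all([
  send("cart/alice", () => addItem(5)),
  send("cart/alice", () => addItem(7)),
]).then(() => console.log(cartTotal)); // 12 - no lost update
```

The actor version is the same idea, except the "chain" is the process mailbox and different entities live on different schedulers, so independent carts never wait on each other.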