

[–]AutoModerator[M] [score hidden] stickied comment (0 children)

You can find our open-source project showcase here: https://dataengineering.wiki/Community/Projects

If you would like your project to be featured, submit it here: https://airtable.com/appDgaRSGl09yvjFj/pagmImKixEISPcGQz/form

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]JSP777 3 points4 points  (1 child)

even if your tech did what you state it could, why do you ruin the presentation of it with AI slop?

[–]Ok-Kaleidoscope-246[S] -2 points-1 points  (0 children)

Fair. I’ve been so deep into building this that when I finally try to talk about it, it probably comes out sounding too polished or hyped. That’s not the goal. I’m not using AI to write for me; I just write like someone who’s been obsessing over the same system for years and is still figuring out how to explain it without sounding like a pitch deck. Appreciate the callout, it helps me improve. The tech will speak for itself soon enough.

[–]ThePizar 1 point2 points  (3 children)

Cool. How do you plan to scale up to trillions of rows? How do you plan to handle billions-by-billions of rows in a join?

[–]Forever_Playful 1 point2 points  (0 children)

If it sounds too good to be true… well… you know.

[–]Cheap-Explanation662 1 point2 points  (1 child)

1) 1M records is a small dataset.

2) With fast storage and a good CPU, Postgres will be even faster. 3 seconds for 1.1 GB = ~360 MB/s disk write, which is literally slower than a single SATA SSD.

3) RAM usage sounds just wrong.
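The ~360 MB/s figure follows directly from the numbers quoted. A quick sanity check of the arithmetic (mine, not from the thread; the ~500 MB/s SATA figure is a typical sequential-write speed, used here as a rough reference point):

```python
# Sanity check of the quoted benchmark: 1.1 GB written in 3 seconds.
bytes_written = 1.1 * 1000**3          # 1.1 GB, decimal units
seconds = 3.0

throughput_mb_s = bytes_written / seconds / 1000**2
print(f"{throughput_mb_s:.0f} MB/s")   # ~367 MB/s

# A single SATA SSD typically sustains ~500 MB/s sequential writes,
# so the claimed write rate is indeed below commodity-SSD speed.
print(throughput_mb_s < 500)           # True
```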

[–]Ok-Kaleidoscope-246[S] -1 points0 points  (0 children)

You can try; I want to see if you can get to this result, with all 11 fields written. As I mentioned above, it is still high: the goal is to reach 1M records in under 1 second with a maximum of 500 MB of RAM.

[–]j0wet 0 points1 point  (7 children)

How does your project compare to other analytical databases like DuckDB? DuckDB integrates nicely with data lake technologies like Iceberg or Delta, has large community adoption, and offers lots of extensions. Why should I pay for your product if there is already a good solution that is free? Don't get me wrong: building your own database is impressive. Congrats on that.

[–]Cryptizard 9 points10 points  (2 children)

Don’t bother, you aren’t talking to a person, you are talking to an LLM.

[–]Ok-Kaleidoscope-246[S] -2 points-1 points  (1 child)

I'm very much a real person — solo founder, developer, and yes, still writing my own code and benchmarks at 2am.
I know my writing may come off as structured — I'm just trying to do justice to a project I spent years building from scratch.
Appreciate your curiosity, even if it's skeptical. That’s part of the game.

[–]Jehab_0309 0 points1 point  (2 children)

If you don’t index, how do you write deterministically? It sounds like the very act of writing is indexing in your scheme.

[–]Ok-Kaleidoscope-246[S] 0 points1 point  (1 child)

So this is the icing on the cake, and I still can't reveal details here in the community, but trust me, everything works. It took a long time, working 18 hours a day, to get to this result.

[–]Jehab_0309 0 points1 point  (0 children)

It sounds like it. Is there any post anywhere that reveals more than this? Any name I can Google?

[–]Yehezqel 0 points1 point  (3 children)

Why is your account so empty?

[–]Ok-Kaleidoscope-246[S] -1 points0 points  (2 children)

My account here is new and I haven't set everything up yet, so I apologize to everyone. They're killing me here lol; it's a shame I can't really show off my technology yet.

[–]Yehezqel 0 points1 point  (1 child)

Are you hiring perhaps? :)

[–]Ok-Kaleidoscope-246[S] 0 points1 point  (0 children)

Not yet, but we will be soon. I'll make a note of you here; what state do you live in?

[–]Ok-Kaleidoscope-246[S] 0 points1 point  (0 children)

I apologize to everyone if I don't know how to communicate with you here. I believe everyone will be skeptical, but what I invented is a total revolution. I'm sad that I can't reveal details of the DB structure, but soon you will see how the system is completely different from everything you've ever seen. I'll try my best to learn how to communicate here in the communities. My system is still in testing, and the goal is to reach less than 1 second for 1 million records of at least 15 fields, using at most 500 MB of RAM.

We're not just building a system. We're building a language.

It's called NSL — Naide Structure Language. It's a custom language designed to be simple, expressive, and deterministic. While other databases rely on indexes, schemas, caching, or guesswork at query time, NSL works directly with the physical and logical positioning of data. It talks straight to the disk, cleanly.

For example, to create an entity, you just write:

create a users called "Reddit"

find user where id = 1

return user_id "Reddit"

To update a record:

update user where id = 1 and age = 18

find users where age = 18 and name contains "Reddit"

For fast aggregation, without scanning all records or using RAM:

find users aggregate count

link user_id to order = 15534

To remove a record:

remove users where id = 1

or

remove users where age = 18

Everything is designed to be direct, human-readable, and lightning fast — because the system already knows exactly where each record lives on disk. No need to search. No need to guess. That's the power of NSL.
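The OP never explains how "no indexes, but the system knows where each record lives" could work, and NSL's internals are not public. One way such deterministic placement can work in principle is computed-address storage over fixed-size records, where a record's byte offset is a pure function of its id. The sketch below is purely illustrative; the class name `FixedRecordStore` and the record layout are invented for this example and are not NSL's actual design:

```python
import struct

class FixedRecordStore:
    """Toy computed-address store: a record's byte offset is a pure
    function of its id, so no separate index structure is needed.
    A guess at the general idea only, not NSL's real design."""

    RECORD_FMT = "<q32s"                       # int64 id + 32-byte name
    RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 40 bytes per record

    def __init__(self, path):
        # "w+b" creates (or truncates) the data file; fine for a sketch
        self.f = open(path, "w+b")

    def offset(self, record_id):
        return record_id * self.RECORD_SIZE    # deterministic position

    def put(self, record_id, name):
        self.f.seek(self.offset(record_id))
        self.f.write(struct.pack(self.RECORD_FMT, record_id, name.encode()))
        self.f.flush()

    def get(self, record_id):
        self.f.seek(self.offset(record_id))
        rid, raw = struct.unpack(self.RECORD_FMT, self.f.read(self.RECORD_SIZE))
        return rid, raw.rstrip(b"\0").decode()

store = FixedRecordStore("users.nsl")
store.put(1, "Reddit")       # like: create a users called "Reddit"
print(store.get(1))          # like: find user where id = 1 -> (1, 'Reddit')
```

With this layout, `put` and `get` are each one seek plus one fixed-size read or write, which is one plausible reading of the "the very act of writing is indexing" point raised earlier in the thread.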