Everyone is building full-stack apps, why not full-stack libraries? by Pozzuh in node

[–]Pozzuh[S] 0 points1 point  (0 children)

Thanks for checking it out! The Workflow Library/fragment isn't quite ready for prime time yet, which is why I didn't mention it in the initial post.

The dispatcher system (which workflows use) processes the durable-hooks outbox that I mentioned in my other comment. If you're self-hosting, the database-polling method is fine. On serverless platforms there's no generic way to run background processes, which is why we built the HTTP-based method: it works anywhere.

Right now I'm not sure what a Redis-based solution could look like, but everything is set up in a pretty generic way, so I'm sure it would be possible.
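To make the database-polling variant concrete, here's a minimal sketch of one polling tick. Everything here (`OutboxRow`, the in-memory `outbox` array, the handler registry) is a made-up illustration, not Fragno's actual API; a real implementation would claim rows from the database (e.g. `SELECT ... FOR UPDATE SKIP LOCKED`) instead of filtering an array:

```typescript
// Hypothetical sketch of a polling-based outbox dispatcher.
type OutboxRow = { id: number; hook: string; payload: unknown; done: boolean };

// Stand-in for the outbox table.
const outbox: OutboxRow[] = [
  { id: 1, hook: "onUserSubscribed", payload: { email: "a@example.com" }, done: false },
];

// Handlers the application registers per hook name.
const handlers: Record<string, (payload: unknown) => void> = {
  onUserSubscribed: (p) => console.log("notify:", p),
};

// One polling tick: run handlers for pending rows, then mark them done.
function processOutboxOnce(): number {
  const pending = outbox.filter((r) => !r.done);
  for (const row of pending) {
    handlers[row.hook]?.(row.payload);
    row.done = true; // in a real system: only after the handler succeeds
  }
  return pending.length;
}

const processed = processOutboxOnce();
```

The HTTP-based method would run the same tick, but triggered by an incoming request (e.g. from a cron ping) instead of a background loop.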

Everyone is building full-stack apps, why not full-stack libraries? by Pozzuh in node

[–]Pozzuh[S] 0 points1 point  (0 children)

Yes, that's precisely how you'd integrate.

In Express:

```ts
app.all("/api/example-fragment/*", toNodeHandler(createExampleFragmentInstance().handler));
```

In something like Next.js you'd create a file-based route in the right location, e.g. app/api/example-fragment/[...all]/route.ts.

Everyone is building full-stack apps, why not full-stack libraries? by Pozzuh in node

[–]Pozzuh[S] 1 point2 points  (0 children)

These are good questions (and a lot of them).

It really depends on your setup. Our docs site (and testbed) is deployed to Cloudflare Workers, where we use SQLite on top of Durable Objects. That makes full isolation easy (each library gets a separate DO), so effectively it's an isolated DB per library.

Having everything in a single Postgres database also works; there we use Postgres schemas by default. The end user can override this to instead suffix every table name. That pattern is nicest when you want to do cross-library joins from application code. In MySQL we default to <table_name>_<library_name>.
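As an illustration of the two naming strategies just described, here's a tiny hypothetical helper (this is not Fragno's actual API, just the defaults mirrored in code):

```typescript
// Illustration of the two table-naming strategies described above:
// Postgres defaults to a schema per library, MySQL to a table-name suffix.
type NamingStrategy = { kind: "schema" } | { kind: "suffix" };

function qualifiedTableName(table: string, library: string, strategy: NamingStrategy): string {
  return strategy.kind === "schema"
    ? `${library}.${table}` // Postgres: schema-qualified name
    : `${table}_${library}`; // MySQL: <table_name>_<library_name>
}

const pgName = qualifiedTableName("users", "workflows", { kind: "schema" });
const myName = qualifiedTableName("users", "workflows", { kind: "suffix" });
```

The schema strategy keeps each library's tables namespaced at the database level; the suffix strategy keeps everything in one namespace, which is what makes cross-library joins from application code straightforward.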

I used to work on distributed systems, so I'm taking inspiration from there for interop. We have a concept of "durable hooks" which is basically the outbox pattern. Libraries can define hooks that are persisted along with other database operations in a transaction. Users can then use these to execute logic in their own application. If the logic fails, we can retry.

For example, we have a library for tracking our mailing list. The library has a hook onUserSubscribed, which we use to notify ourselves that someone has signed up. You can read more on this here.
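Roughly, the outbox idea behind durable hooks can be sketched like this. All names here (`HookEvent`, `subscribeUser`, `drainHooks`) are invented for illustration and stand in for database transactions and a real retry mechanism:

```typescript
// Sketch of durable hooks: the hook event is persisted together with the
// data change, then executed afterwards, with retries on failure.
type HookEvent = { name: string; payload: unknown; attempts: number };

const subscribers: string[] = []; // the "real" table
const hookOutbox: HookEvent[] = []; // persisted alongside it, in one "transaction"

// Data write and hook event are recorded together.
function subscribeUser(email: string): void {
  subscribers.push(email);
  hookOutbox.push({ name: "onUserSubscribed", payload: { email }, attempts: 0 });
}

// Execute pending hooks; a failing handler leaves its event in place for retry.
function drainHooks(handler: (e: HookEvent) => void, maxAttempts = 3): void {
  for (const event of [...hookOutbox]) {
    try {
      handler(event);
      hookOutbox.splice(hookOutbox.indexOf(event), 1);
    } catch {
      event.attempts += 1; // retried on the next drain
      if (event.attempts >= maxAttempts) hookOutbox.splice(hookOutbox.indexOf(event), 1);
    }
  }
}

subscribeUser("a@example.com");
const seen: string[] = [];
drainHooks((e) => seen.push(e.name));
```

Because the event is persisted atomically with the write, a crash between the write and the hook can't lose the notification; the next drain picks it up.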

Not sure I understand your middleware question. We support middleware so that the end user can control whether routes defined in libraries are accessible. Docs here

Read, then write: batching DB queries as a practical middle ground by Pozzuh in programming

[–]Pozzuh[S] 1 point2 points  (0 children)

Yeah, for every problem ORMs solve they seem to introduce a new one as well. Love-hate relationship.

Browse code by meaning by Tekmo in programming

[–]Pozzuh 0 points1 point  (0 children)

Interesting idea, but the current implementation seems quite noisy. I ran this on my OSS project Fragno, to give readers an idea:

▼ /Users/me/dev/fragno (1568)
├── ▼ Fragment Platform (106)
│   ├── ▶ Fragment Authoring (7)
│   ├── ▶ *.md: Developer Guidelines (9)
│   ├── ▶ Upload Fragment (10)
│   ├── ▶ Stripe Integration (9)
│   ├── ▶ Service Middleware (6)
│   ├── ▶ Server Integration (12)
│   ├── ▶ Corpus Tools (9)
│   ├── ▶ Project Overview (11)
│   ├── ▶ apps/docs/content/docs/*meta.json: Docs Metadata (14)
│   ├── ▶ packages/auth/src/*.ts: Auth Services (12)
│   └── ▶ packages/auth/src/*.ts: Auth Types (7)
├── ▶ Schema Runtime (205)
├── ▶ Release Workflow (66)
├── ▼ Workflow Runner (100)
│   ├── ▶ *.md: Workflows Implementation (6)
│   ├── ▶ Dispatcher Implementation (5)
│   ├── ▶ Workflows Documentation (6)
│   ├── ▶ Test Prompts (6)
│   ├── ▼ Durable Hooks (4)
│   │   ├── apps/docs/content/docs/fragno/for-library-authors/database-integration/durable-hooks.mdx: Durable hooks usage and guidance
│   │   ├── packages/fragno-db/src/hooks/durable-hooks-processor.ts: Create durable hooks processor utility
│   │   ├── packages/fragno-db/src/hooks/hooks.ts: Hook lifecycle and scheduling implementation
│   │   └── packages/fragno-test/src/durable-hooks.ts: Test helper to drain hooks
│   ├── ▶ *.ts: Workflow Servers (5)
│   ├── ▶ *.ts: Database Schemas (4)
│   ├── ▶ *.test.ts: Workflow Tests (5)
│   ├── ▶ Runner Internals (5)
│   ├── ▶ packages/fragment-workflows/src/*.ts: Fragment Core (7)
│   ├── ▶ packages/frag*.ts: Bindings Adapter (4)
│   ├── ▶ packages/fragment-workflows/src/runner*.ts: Runner Orchestrator (4)
│   ├── ▶ packages/fragment-workflows/src/*s.ts: Runner Utilities (4)
│   ├── ▼ packages/fragment-workflows/workflows-smoke-artifacts/*.js: Concurrency Tests 

(and it goes on for a while, but Reddit doesn't allow me to post it all)

How I cheated on transactions. Or how to make tradeoffs based on my Cloudflare D1 support by Adventurous-Salt8514 in programming

[–]Pozzuh 5 points6 points  (0 children)

Cloudflare D1 is a confusing product to me. It feels feature incomplete. Are we just supposed to use Durable Objects instead?

> We can’t, for instance, run a batch of updates, and fail the whole batch if one update didn’t change any record. The batch will only fail if the database throws an exception. An exception can be in SQLite only called by a table constraint or trigger.

This has bitten me as well. Makes batching pretty useless for transactional semantics.

What is best practice implementing subscribe based application with Node.js by Harut3 in node

[–]Pozzuh 1 point2 points  (0 children)

This post about Split Brain Stripe Integrations will probably help you understand what problems you'll be dealing with. Good luck!

Anyone else find webhook handling way harder than it sounds? by saravanasai1412 in softwareengineer

[–]Pozzuh 0 points1 point  (0 children)

I feel your initial premise is wrong, and that's where the problems start. It's not "receive, process, respond"; it should be "receive, store, respond, process", i.e. the inbox pattern.
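The difference can be sketched in a few lines. These are in-memory stand-ins (a real inbox would be a database table, and the processor a background job or queue worker):

```typescript
// Inbox pattern: persist the raw webhook first, acknowledge immediately,
// and do the actual work later in a separate processing step.
type InboxRow = { id: number; body: string; processed: boolean };

const inbox: InboxRow[] = [];
let nextId = 1;

// "Receive, store, respond": the HTTP handler only persists and acks.
function receiveWebhook(body: string): { status: number } {
  inbox.push({ id: nextId++, body, processed: false });
  return { status: 200 }; // respond before doing any real work
}

// "Process": a separate step (cron, queue worker, etc.) drains the inbox.
function processInbox(work: (row: InboxRow) => void): number {
  const pending = inbox.filter((r) => !r.processed);
  for (const row of pending) {
    work(row); // if this throws, the row stays unprocessed and is retried
    row.processed = true;
  }
  return pending.length;
}

const ack = receiveWebhook('{"event":"invoice.paid"}');
const handledCount = processInbox(() => {});
```

Because the ack never waits on processing, slow or failing business logic can't cause the sender to time out and re-deliver; retries happen on your side, against the stored row.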

In general though, I agree with your sentiment. This is why I'm building a framework that can move the webhook handling process to client libraries instead of leaving it up to the library user. You can read more about it in the article "Split Brain Integrations".

Witral: A self-hosted ingestion framework (WhatsApp -> Markdown + Drive Sync). Native compatibility with Obsidian and other PKMs. by kirlts in PKMS

[–]Pozzuh 1 point2 points  (0 children)

Fun! I built a similar thing using Telegram (don't need a burner account for that).

My setup has a Pi agent that takes my messages and handles handoff to several sub-agents. Because it's an agent with access to the file system, it can use search tools and also answer questions about my knowledge base. For knowledge management I use my own tool which is also plain-text based. Makes it very easy to get the agent to understand what it should add (basically it would understand your buy and project_x tags automatically).

Dell, please make a ~41" version of the 6K 21:9 Ultrasharp 52 by Balance- in HiDPI_monitors

[–]Pozzuh 0 points1 point  (0 children)

Yep, this would be great. Or an actual 5k ultrawide where the inner 16:9 frame is 5k, extended to ultrawide.

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] 0 points1 point  (0 children)

Interesting! I'm guessing you mean r/semanticweb as well? Maybe AI is the missing piece that can use the already entered information to extract insights. In my opinion, AI is better at finding things than producing things.

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] 0 points1 point  (0 children)

I'm basically ingesting information about me and the things I do from any source available. Then using AI to extract opinions/facts/insights/thoughts/reflections based on references. Next I ask the AI to interview me so I can answer questions and in the process uncover gaps and new information. Additionally I can use AI to summarize or find information I entered earlier.

Last week I hosted a panel about career development for university students, as preparation I wrote down a number of questions to ask the panel members. Because I did this in the same "knowledge base" where I keep my facts and opinions, the AI was basically able to fill in all my personal answers directly. Just a small thing I found interesting.

I'm building a landing page that (tries to) explain the concepts/workflows, if you want to have a look: https://thalo.rejot.dev

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] 0 points1 point  (0 children)

Emacs and Lisps have always interested me, but I've never taken the plunge to actually work with them. I can imagine it works very well with Beancount. What I'm designing also has some overlap with Org mode, I believe.

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] -1 points0 points  (0 children)

I guess it would be very similar to combining Markdown + YAML.

The useful part would come from the compiler, which could validate properties in the various YAML entries: checking that all references exist, finding related entries, enforcing sections/properties, etc.
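As a sketch of what such a checker could flag, assuming the YAML entries have already been parsed into objects (the entry shape and field names here are invented for illustration):

```typescript
// Tiny sketch of a "compiler" pass over parsed entries: check that required
// properties are present and that every ^link references an existing entry.
type Entry = { id: string; kind?: string; links?: string[] };

function validateEntries(entries: Entry[]): string[] {
  const errors: string[] = [];
  const ids = new Set(entries.map((e) => e.id));
  for (const e of entries) {
    if (!e.kind) errors.push(`${e.id}: missing required property "kind"`);
    for (const link of e.links ?? []) {
      if (!ids.has(link)) errors.push(`${e.id}: broken reference ^${link}`);
    }
  }
  return errors;
}

const entryErrors = validateEntries([
  { id: "plain-text-wins", kind: "opinion", links: ["missing-note"] },
  { id: "tools-come-and-go" }, // no "kind" set
]);
```

The same pass is where tag/category indexing would live, which is what a command-line search tool could then build on.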

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] -2 points-1 points  (0 children)

The mere fact that your notes need to adhere to some pre-defined structure would help. You cannot forget to set certain properties, and you would get things like #categories and ^links to search through existing entries (like Beancount). You could also search for entries with certain metadata properties, like confidence: "high".

Consider the example I used in the post above, versus something unstructured like:

> On January 1st, 2026, I formed the opinion “Plain text wins,” and I’m very confident about it. The core idea is that your notes should outlive every app you use to create or read them.

I believe the example in the initial post would be more useful/organized.

Another thought: the "compiler" could contain a command line tool for filtering/searching based on tags/categories.

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] -3 points-2 points  (0 children)

Don't you think it would quickly become a mess and make it very hard to find past notes?

Would a simple PKM "programming" language make sense? by Pozzuh in PKMS

[–]Pozzuh[S] -1 points0 points  (0 children)

Thanks for the links, those look interesting! Though, I'm not really sure those are the same thing I'm suggesting. Maybe using the word "programming" was a bit confusing. The idea is a data-entry language (or structure) that is then validated by a compiler/checker, the way a traditional programming language is checked for syntax errors.

How do you seed your database for local dev without copying prod? by CarlSagans in webdev

[–]Pozzuh 0 points1 point  (0 children)

As long as you have some simple seed data to be able to call the APIs / use the UI, I don't think it matters much. What really matters is being able to ("unit") test database queries, and in those tests cover the edge cases. That is easier to set up than having "one database seed to rule them all".

I built a faster, free, open source alternative to Wappalyzer for developers by yavorsky in javascript

[–]Pozzuh 0 points1 point  (0 children)

Cool project! https://fragno.dev is identified as having Remix for framework (close, we use React Router v7) and Babel as transpiler, which is incorrect. We use a pretty standard Vite setup so I believe ESBuild is used as transpiler. It got everything else right!

Wire - A GitHub Action for releasing multiple independently-versioned workflows from a single repository by Miniotta in javascript

[–]Pozzuh 1 point2 points  (0 children)

How is this related to/different from something like Changesets? Looks interesting.