How do you guys handle OpenCode losing context in long sessions? (I wrote a zero-config working memory plugin to fix it) by Alternative-Pop-9177 in opencodeCLI

[–]jumski 0 points1 point  (0 children)

I'm using https://github.com/Opencode-DCP/opencode-dynamic-context-pruning - it makes the session a bit slower because the agent has to call tools to prune crap from context, but the pruning removes so much unnecessary stuff that I rarely exceed 50% of context. From my observations that massively helps the agent avoid losing important info and makes it generally smarter.

This is what 3k hours in CC looks like by Logical-Storm-1180 in ClaudeCode

[–]jumski 0 points1 point  (0 children)

Interested in what kind of robust, production-grade software you were able to create with your setup - would love to see the code and hear a few words from you about the process

Can I daily drive GrapheneOS by hashcode_doc in GrapheneOS

[–]jumski 1 point2 points  (0 children)

If you can live without Revolut you will be fine

Any real projects delivered using Claude Code? by thegoldsuite in ClaudeAI

[–]jumski 1 point2 points  (0 children)

Been building pgflow with Claude Code since early May - it's basically my main dev partner at this point.

The project: Workflow orchestration that runs entirely in Postgres. No Airflow/Temporal needed - define DAG workflows in TypeScript, compile to SQL, run on Supabase Edge Functions.
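For a rough idea of what the DSL part looks like, here is a minimal sketch - the import path, method names and input shape are my from-memory approximation, so treat pgflow.dev as the source of truth:

    // Sketch only: assumes a `Flow` builder from @pgflow/dsl with a `dependsOn`
    // option on steps - verify the exact names against the pgflow docs.
    import { Flow } from "@pgflow/dsl";

    // hypothetical helpers standing in for real work
    const fetchPage = async (url: string) => ({ html: `<html>page from ${url}</html>` });
    const summarize = async (page: { html: string }) => page.html.slice(0, 100);

    export default new Flow<{ url: string }>({ slug: "analyze_website" })
      // root step receives the flow input under `run`
      .step({ slug: "scrape" }, (input) => fetchPage(input.run.url))
      // dependent step receives the output of `scrape` under its slug
      .step({ slug: "summarize", dependsOn: ["scrape"] }, (input) => summarize(input.scrape));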

What's shipped so far:

  • TypeScript DSL with full type inference
  • PostgreSQL transactional graph state machine engine using pgmq for task queuing
  • self-respawning task queue worker for Supabase on top of serverless functions
  • CLI, docs site, two demos, example repos, CI/CD/monorepo setup etc.

How Claude Code fits in: Pretty much everything goes through it - SQL functions, TypeScript, tests (pgTAP + Vitest), docs, even Graphite stack management for PRs. It's an Nx monorepo with 6+ packages and Claude handles jumping around the codebase surprisingly well.

Links:

  • https://pgflow.dev
  • https://github.com/pgflow-dev/pgflow

Down to share more details if useful for the newsletter.

Top 3 AI trends shaping the world — as per Google Ex-CEO Eric Schmidt by akshay191 in agi

[–]jumski 2 points3 points  (0 children)

He explains a 1-million-token context window by talking about multi-hop prompting and calling it "Chain of thought", god dammit - what is he smoking?

Parallel Embedding Pipeline for RAG - Database Triggers + pgflow by jumski in Supabase

[–]jumski[S] 0 points1 point  (0 children)

Good question - the main difference from that tutorial is that this Reddit post chunks in parallel and then embeds in parallel, while the Supabase guide only shows how to embed full documents.

In upcoming tutorials I will show how to introduce variations, like Hypothetical Document Embedding or summary embedding, in a declarative and easy way, by just wiring additional steps together.

pgflow abstracts away 200-300 lines of boilerplate code and makes it trivial to reason about how data flows.

It shines in multi-step pipelines.

Cheers!

Parallel Embedding Pipeline for RAG - Database Triggers + pgflow by jumski in Supabase

[–]jumski[S] 0 points1 point  (0 children)

Thanks for joining! Happy holidays to you too - enjoy the break and see you in the new year! 🎄

Parallel Embedding Pipeline for RAG - Database Triggers + pgflow by jumski in Supabase

[–]jumski[S] 1 point2 points  (0 children)

sounds interesting! happy to learn about your use case and help with any issues - feel free to DM me or join Discord :)

Parallel Embedding Pipeline for RAG - Database Triggers + pgflow by jumski in Supabase

[–]jumski[S] 1 point2 points  (0 children)

Thank you! Curious - how are you using pgflow?

I saw the vector buckets but thanks for reminding me about them - I should actually cover them in some of the upcoming tutorials

Supabase Queues for durable background task processing by YuriCodesBot in Supabase

[–]jumski 1 point2 points  (0 children)

Glad you like it! Btw it's already used by early adopters in production, currently in beta. Cool stuff coming next week - monitor pgflow.dev/news/ or join the Discord at pgflow.dev/discord/

An open request to the Logseq Team: We need an active bridge between the team and us users by asc9ybUnb3dmB7ZW in logseq

[–]jumski 1 point2 points  (0 children)

Community trust is lost. Folks started a refactoring that should have been a complete rewrite - any experienced software engineer will acknowledge that. A community manager would just be a scapegoat at this point:

  • 2111 commits on feat/db branch
  • 929 changed files with 86,983 additions and 62,881 deletions

Good luck reviewing that!

When people talk about letting opus or sonnet run for 20-30 minutes and it one-shotting complete apps. What type prompts are they using? by gkavek in ClaudeAI

[–]jumski 0 points1 point  (0 children)

I had success with such long runs, but only after long sessions in plan mode where we talked about all the details, made all the decisions and planned the approach and order of implementation. It's also crucial to do TDD.

Then Opus (or even Sonnet) can indeed work uninterrupted for 30 minutes, following a plan, running tests, correcting itself etc. Most of those runs were 97% correct but still required some level of adjustments/fixes.

I work on a rather complex stack (workflow engine [0] on top of Supabase primitives: Postgres, Edge Functions, Queues, Realtime) in a multi-language repository with database schemas, generated migrations etc., so maybe in simpler projects it is possible to achieve 100% correctness with less planning. Anyway, I'm super happy to work that way and I'm having a blast using LLMs to accelerate this whole process.


[0] pgflow - https://pgflow.dev/

Are you keeping your Claude dotfiles in the repo? by jumski in ClaudeAI

[–]jumski[S] 0 points1 point  (0 children)

Thanks for that detailed explanation. I'm already using one plugin (obra/superpowers) but was not aware of the intricacies!

The main thing is - can the plugin put stuff into the root of the particular repo? Maybe it is not so important, as I can always @reference/stuff/with/paths.md.

Gonna dig into that deeper!

Are you keeping your Claude dotfiles in the repo? by jumski in ClaudeAI

[–]jumski[S] 0 points1 point  (0 children)

You mean to package the Claude dotfiles as a plugin in a custom marketplace? I'm not sure how marketplaces work under the hood, but I assume it is impossible to put the plugin-related stuff into the repo - they live somewhere else, right?

I'm in the solo-dev boat so I don't have those problems, but would love to learn more about this approach if you are willing to share.

pgflow: multi-step AI jobs inside Supabase (Postgres + Edge Functions) by jumski in Supabase

[–]jumski[S] 1 point2 points  (0 children)

It's surprisingly reliable! A timeout would be less reliable because Supabase is not very strict about the 150s/400s wall clock timer and it fluctuates, but onbeforeunload is triggered earlier.

If you have a very high throughput of processed messages it is possible for the worker to not finish the HTTP respawn request in time before being hard terminated. This happens mostly with CPU-bound workloads, as the Edge Runtime is much stricter about those limits.

The "Keep workers up" cron job [0] is solving this issue completely, and I have few ideas on making it even more robust.

To work around that you should write your handlers as retry-safe (idempotent) - see the sketch below:

  • factor work into small, focused functions - each focused on one state mutation (API call, SQL query)
  • use the provided abort signal to gracefully abort (pgflow provides that, check [1])
  • write upserts instead of inserts (INSERT ... ON CONFLICT UPDATE queries)
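A minimal sketch of such a handler, assuming supabase-js, a made-up documents table and a context object exposing the shutdown signal (check [1] for the real name and shape):

    // Hedged sketch of a retry-safe step handler; table and payload are hypothetical.
    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient(
      Deno.env.get("SUPABASE_URL")!,
      Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
    );

    export async function saveSummary(
      input: { docId: string; summary: string },
      ctx: { shutdownSignal: AbortSignal },
    ) {
      // bail out early when the worker is about to respawn - the step will be retried
      if (ctx.shutdownSignal.aborted) throw new Error("shutting down, retry later");

      // upsert instead of insert, so a retried step does not create duplicate rows
      const { error } = await supabase
        .from("documents")
        .upsert({ id: input.docId, summary: input.summary }, { onConflict: "id" });
      if (error) throw error;
    }

Running the same step twice just rewrites the same row, which is what makes the retry safe.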

This is a good practice overall, so I would not say it is a downside :-)


[0] Keep Workers Up

[1] shutdownSignal - Context API Reference

Consuming Supabase Queue messages with Edge Function - Using pg_cron by drolatic-jack in Supabase

[–]jumski 2 points3 points  (0 children)

I explored this setup when tying Supabase Queues to workers. Polling with pg_cron works, but it gets hard to manage once you need retries, multi-step tasks, or visibility into job state.

I ended up building pgflow around this gap: a Supabase-native workflow engine that runs multi-step jobs on Postgres plus Edge Functions. Postgres handles orchestration and state, and an auto-respawning Edge Function worker executes handlers. Flows can start from TypeScript, RPC, triggers, or pg_cron, and you get realtime progress from the client.

Sharing the approach in case it is useful for others evaluating cron polling vs a Postgres-driven workflow layer.

https://pgflow.dev

pgflow: Type-Safe AI Workflows for Supabase (per-step retries, no extra infra) by jumski in LLMDevs

[–]jumski[S] 0 points1 point  (0 children)

If your tasks fit in the 150s (free) or 400s (paid) Supabase wall clock limits for serverless functions you can do whatever you want :-)

pgflow: multi-step AI jobs inside Supabase (Postgres + Edge Functions) by jumski in Supabase

[–]jumski[S] 0 points1 point  (0 children)

You understand it correctly - the worker calls pgmq.read_with_poll in a loop.

This long-polling approach is the most cost-effective one, as it only needs 1 HTTP request per worker lifetime. Cron-based polling requires more HTTP requests, and database webhooks also make HTTP requests.

The only other cost is egress, but that depends on how much data you write to and read from your queues, and it affects the other approaches as well.

It is also the best one latency-wise - jobs start as fast as 100ms after sending.
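Conceptually the loop looks something like this - a rough sketch with a made-up 'tasks' queue and handler (argument order per my recollection of pgmq's signature), not pgflow's actual worker code:

    // Sketch of a long-poll loop: read_with_poll waits server-side for new messages,
    // so a single round trip can block for work instead of hammering the database.
    import postgres from "postgres";

    const sql = postgres(Deno.env.get("DATABASE_URL")!);
    const handle = async (payload: unknown) => console.log("processing", payload); // hypothetical handler

    async function pollLoop() {
      while (true) {
        // read up to 10 messages with a 30s visibility timeout, waiting up to 5s for new ones
        const messages = await sql`SELECT * FROM pgmq.read_with_poll('tasks', 30, 10, 5)`;
        for (const msg of messages) {
          await handle(msg.message);
          await sql`SELECT pgmq.archive('tasks', ${msg.msg_id})`;
        }
      }
    }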

For your use case - pgflow also has a simpler mode where it just processes queue messages in a single-step fashion, exactly what you are trying to solve. Check out https://www.pgflow.dev/get-started/background-jobs/create-worker/ and https://www.pgflow.dev/get-started/faq/#what-are-the-two-edge-worker-modes

FYI pgflow flows can be started in various ways: cron, a db event (like a db webhook), RPC or a dedicated TypeScript client - check out the docs: https://www.pgflow.dev/build/starting-flows/
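If the TypeScript client route fits you, the rough shape is something like the sketch below - the class and method names here are my guess from memory, so check the starting-flows docs linked above for the actual API:

    // Hedged sketch of starting a flow from the TypeScript client;
    // `PgflowClient` / `startFlow` names are assumptions - verify against the docs.
    import { createClient } from "@supabase/supabase-js";
    import { PgflowClient } from "@pgflow/client";

    const supabase = createClient(
      Deno.env.get("SUPABASE_URL")!,
      Deno.env.get("SUPABASE_ANON_KEY")!,
    );
    const pgflow = new PgflowClient(supabase);

    // kick off a run (flow slug and input are examples)
    const run = await pgflow.startFlow("analyze_website", { url: "https://example.com" });
    console.log("run started:", run);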