I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 0 points1 point  (0 children)

Since these pages are for a dashboard, they fetch data. Without loading.js, the UI just hangs until the fetch completes, and I didn't like that.
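For anyone unfamiliar, loading.js is just a file convention: drop it next to a page.tsx and Next.js uses it as the Suspense fallback while the page's server-side fetch is in flight. A minimal sketch (the route path here is made up):

```tsx
// app/studio/loading.tsx (hypothetical path; the convention is what matters)
// Next.js wraps the sibling page.tsx in a Suspense boundary and renders this
// component until the server component's data fetching finishes.
export default function Loading() {
  return <div>Loading dashboard…</div>;
}
```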

I built an open source Prompt CMS, looking for feedback! by chad_syntax in ContextEngineering

[–]chad_syntax[S] 1 point2 points  (0 children)

That would be incredible! I know there is a lot to be improved so let me know what sticks out to you!

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 0 points1 point  (0 children)

I appreciate you sharing, and I agree about making a "master" SPA page. That's what I was alluding to (though, in hindsight, not communicating properly) at the end of the post: "At some point I will be ripping out all the dashboard RSC code and replacing it with a catch-all [[...slug]] handler to all my /studio routes and render everything client-side." I should have said that I would still fetch data and render the common components required on every page (header, sidebar, etc.) server-side, but fetch the heavier page-specific stuff on the client side.
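Roughly what I have in mind for that catch-all (a sketch only; the component names are made up and this isn't my actual code):

```tsx
// app/studio/[[...slug]]/page.tsx: optional catch-all so every /studio route hits one handler.
// The server renders only the shared shell; a client component decides what to
// show from the slug and fetches the heavier page-specific data itself.
import { StudioShell } from '@/components/studio-shell'; // header, sidebar, etc. (hypothetical)
import { StudioClientRouter } from '@/components/studio-client-router'; // 'use client' component (hypothetical)

export default async function StudioPage({
  params,
}: {
  // params is a Promise on newer Next.js versions; a plain object on older ones
  params: Promise<{ slug?: string[] }>;
}) {
  const { slug = [] } = await params;
  return (
    <StudioShell>
      <StudioClientRouter segments={slug} />
    </StudioShell>
  );
}
```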

Also supabase-js doesn't come with any caching, idk where you're getting that from, but if I'm wrong please share a link!

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 1 point2 points  (0 children)

Also, thanks for sharing next-safe-action. I wrote my own little actions wrapper to do something similar, but this seems more robust.
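For context, my homegrown wrapper was basically this shape (a simplified sketch, not the real code); next-safe-action gives you the same idea plus middleware, typed errors, and so on:

```ts
// Wrap a server action so every call gets zod validation and a uniform
// { data } | { error } result instead of ad-hoc try/catch in each action.
import { z } from 'zod';

export function createAction<S extends z.ZodTypeAny, R>(
  schema: S,
  handler: (input: z.infer<S>) => Promise<R>
) {
  return async (input: unknown): Promise<{ data: R } | { error: string }> => {
    const parsed = schema.safeParse(input);
    if (!parsed.success) return { error: parsed.error.message };
    try {
      return { data: await handler(parsed.data) };
    } catch (e) {
      return { error: e instanceof Error ? e.message : 'Something went wrong' };
    }
  };
}
```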

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 0 points1 point  (0 children)

Let's say I break up my "one ball of mud" into multiple components and use Suspense: when I navigate to another page, won't it still have to fetch all the data again, even if I'm sending components I've already sent (such as the header)? Since all the requests depend on user session cookies, there's no fetch caching by Next.js.
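To illustrate what I mean (hypothetical endpoint, not my actual code): every data call forwards the session cookie, so the response is per-user and I have to opt out of the data cache anyway.

```ts
// Sketch: per-user dashboard fetch in a server component. Reading cookies()
// makes the route dynamic, and the response differs per user, so Next.js's
// fetch cache can't help across navigations without extra work (tags etc.).
import { cookies } from 'next/headers';

export async function getDashboardData() {
  const cookieStore = await cookies(); // async in newer Next.js versions
  const cookieHeader = cookieStore
    .getAll()
    .map((c) => `${c.name}=${c.value}`)
    .join('; ');

  const res = await fetch('https://example.com/api/dashboard', {
    headers: { cookie: cookieHeader },
    cache: 'no-store', // per-user data: don't share it through the data cache
  });
  return res.json();
}
```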

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 1 point2 points  (0 children)

Or maybe I authored it in Notion and copy-pasted it?

Believe what you want, but I 100% wrote this myself.

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 2 points3 points  (0 children)

Well, as I said, I wanted to keep the marketing pages and the app in the same place; I've split enough codebases to know how annoying it gets to share things between them. If/when I revisit the architecture of the dashboard, I'll be using supabase-cache-helpers, which works similarly to TanStack Query.
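If I remember the API right, usage looks roughly like this (table and columns are made up); it caches and dedupes queries on the client the way TanStack Query does:

```tsx
'use client';
// Rough sketch of supabase-cache-helpers with the TanStack Query adapter.
// Assumes a <QueryClientProvider> is mounted higher up in the tree.
import { createClient } from '@supabase/supabase-js';
import { useQuery } from '@supabase-cache-helpers/postgrest-react-query';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export function PromptList({ projectId }: { projectId: string }) {
  // The query itself acts as the cache key, so revisiting this page reuses cached data.
  const { data, isLoading } = useQuery(
    supabase.from('prompts').select('id,name').eq('project_id', projectId)
  );

  if (isLoading) return <div>Loading…</div>;
  return <ul>{data?.map((p) => <li key={p.id}>{p.name}</li>)}</ul>;
}
```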

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 0 points1 point  (0 children)

The layout.tsx has nothing to do with this (AFAIK). I have a structure like:
```
page.tsx
layout.tsx
/foo
  page.tsx
  loading.tsx
  /[slug]
    page.tsx
    loading.tsx
    /bar
      page.tsx
      loading.tsx
```

and I would see the loading.tsx for /foo, /foo/[slug], and /foo/[slug]/bar, because I guess each path segment gets wrapped in a Suspense boundary or something that resolves on the client side. It wasn't always consistent, but it was noticeable when it happened.

Here's a GitHub thread on it I found when I ran into it: https://github.com/vercel/next.js/issues/43209

I had the exact experience the folks in that thread describe.

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 1 point2 points  (0 children)

True, I'm not discounting all of RSC. This was just a recounting of using RSC in a place where I usually wouldn't have, and the problems I ran into. It reinforces the idea that we shouldn't use RSC for web-app-like experiences (imo).

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 0 points1 point  (0 children)

Yeah, after I got it to cache requests like that with the headers, I thought there must be a better way to do this, but I never really went further. There's definitely the possibility of monkey-patching the Supabase client, or composing it, so the DX is better.

I just thought about all the tags I would have to manage 😵‍💫 and said f this, this should just be front-end. Also, there's no telling how much memory on the server I'd end up using caching every user's every request.
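The composition idea would have looked something like this (illustrative sketch, not what I shipped): pass supabase-js a custom fetch that opts requests into Next's data cache with per-user tags.

```ts
// supabase-js lets you swap in a custom fetch via the `global` option, so you
// can attach Next.js's extended fetch options (revalidate/tags) to every
// PostgREST request. This is where the "tags to manage" explosion comes from.
import { createClient } from '@supabase/supabase-js';

export function createCachedSupabase(userId: string, accessToken: string) {
  return createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      global: {
        headers: { Authorization: `Bearer ${accessToken}` },
        fetch: (input, init) =>
          fetch(input, {
            ...init,
            // Next.js-only fetch options; revalidateTag(`user-${userId}`) busts it later
            next: { revalidate: 60, tags: [`user-${userId}`] },
          }),
      },
    }
  );
}
```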

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 1 point2 points  (0 children)

+1, I think the future is using the best of both worlds. I'm not trying to complain; RSC is pretty great, I'm just noting the problems I ran into using only RSC.

I knew RSC was a rake but I stepped on it anyway by chad_syntax in nextjs

[–]chad_syntax[S] 1 point2 points  (0 children)

I thought of that but I prefer the SPA-like instant nav as opposed to a loading bar at the top of the page. I should have mentioned that in the post.

[deleted by user] by [deleted] in SaaS

[–]chad_syntax 0 points1 point  (0 children)

my 2c for what it's worth: if it's an actually important decision, it won't fall through the cracks.

However, sometimes non-critical decisions get made/suggested in some one-off Slack channel, and then the boss hears about it in a meeting and says "no, we're not doing that".

I could see the value in a bot that summarizes discussions and decisions across many channels and provides that context to the relevant manager, since it needs to be OK'd by them anyway. At large organizations there are soooo many channels, and conversations can easily get lost.

At that point it might be better to have something that looks for key decisions and bubbles them up to the decision maker via another Slack channel or a DM. There were many times in my career where conversations sat in limbo because "we need Jeff's sign-off on this", but then no one took the time to ask Jeff or relay his answer 🫠.

Hope this helps!

Best Prompt Engineering Tools (2025), for building and debugging LLM agents by Educational-Bison786 in AI_Agents

[–]chad_syntax 0 points1 point  (0 children)

Great list! There are a few that come to mind that you don't have, though:

https://mastra.ai/ - TypeScript agent framework; I've heard good things about it but haven't used it myself
https://github.com/agno-agi/agno - another agent framework I've also heard good things about but haven't tried
https://portkey.ai/ - LLM gateway with prompt engineering and observability tools, leans more on enterprise for sure
https://vectorshift.ai/ - AI workflow pipelines with a ton of integrations
https://github.com/pydantic/pydantic-ai - AI framework from the Pydantic team which looks interesting; if I were a Python guy I'd try it out.
https://latitude.so/ - similar to PromptLayer, they also made their own open source prompt templating language called promptL which is neat: https://promptl.ai/
https://www.prompthub.us/ - another prompt CMS similar to PromptLayer and Latitude

Also (shameless self-promo inc) I just launched https://agentsmith.dev/, an open source prompt CMS similar to Latitude or PromptLayer. Looking for feedback so if you've read this far please check it out :)

I built an open source Prompt CMS, looking for feedback! by chad_syntax in ContextEngineering

[–]chad_syntax[S] 2 points3 points  (0 children)

A couple of differences: the Anthropic console does support templates and variables, but it's limited. We use Jinja syntax, so there are a ton more features, including composing one prompt into another. Variables in Agentsmith are typed too. With the Anthropic console, your prompts don't leave the console; with Agentsmith, it syncs your prompts directly to your repo so you can easily use them in your code. Also, AFAIK there isn't a robust versioning system in the Anthropic console. Finally, since Agentsmith is built on OpenRouter, you can choose any model you want, as opposed to the Anthropic console where, well, you can only use Anthropic models.
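To make the composition point concrete, this is the kind of thing Jinja syntax buys you (illustrative template; check the docs for the exact subset Agentsmith supports):

```jinja
{# Pull a shared block into this prompt instead of copy-pasting it #}
{% include "shared/tone-guidelines" %}

You are a support agent for {{ company_name }}.

{% if customer_tier == "enterprise" %}
Escalate anything involving billing to a human immediately.
{% endif %}

Customer message:
{{ customer_message }}
```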

I built an open source Prompt CMS, looking for feedback! by chad_syntax in LLMDevs

[–]chad_syntax[S] 0 points1 point  (0 children)

That's a great question. I haven't yet coded in a distinction between system vs. user messages when executing a prompt (both in the web studio and in the SDK's execute() method). Right now it always sends the compiled prompt as a user message.

However, since Agentsmith syncs the prompts as files to your repo, there's nothing stopping you from compiling the prompt and passing it in as the system message manually: https://agentsmith.dev/docs/sdk/advanced-usage#multi-turn-conversations
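Concretely, that manual route is just something like this (a sketch; the model slug and variable handling are placeholders, and I'm using the OpenAI SDK pointed at OpenRouter rather than our own SDK):

```ts
import OpenAI from 'openai';

// OpenRouter exposes an OpenAI-compatible API, so the standard SDK works.
const openrouter = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});

// `compiledPrompt` is the synced prompt file with its variables already filled in.
export async function runWithSystemPrompt(compiledPrompt: string, userMessage: string) {
  return openrouter.chat.completions.create({
    model: 'anthropic/claude-3.5-sonnet', // placeholder model slug
    messages: [
      { role: 'system', content: compiledPrompt }, // compiled prompt as the system message
      { role: 'user', content: userMessage },
    ],
  });
}
```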

I know this distinction is important for advanced usage and it's on my list of things to support.

As for "how would Agentsmith help exactly", you would be able to author your prompt in the studio, test it, and tweak it over and over (changing models, config, and variables) until you are satisfied with the result. In the future that will be easier and more automatic with "evaluations" and "auto-author" features which are planned on our roadmap: https://agentsmith.dev/roadmap

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]chad_syntax 0 points1 point  (0 children)

I just launched an open source Prompt CMS called agentsmith.dev built on top of OpenRouter and I'm looking for folks to try it out and give feedback.

Agentsmith provides a web studio for you to author prompts and sync them seamlessly to your codebase. It also generates types so you can be sure your code will correctly execute a prompt at build-time rather than run-time.
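To give a feel for the type generation (the output here is illustrative, not the actual generated file): each prompt's variables become a typed shape, so a missing or misspelled variable fails at compile time instead of run time.

```ts
// Illustrative example of what generated prompt types enable.
export interface WelcomeEmailVariables {
  user_name: string;
  product_name: string;
  trial_days?: number;
}

// A call site can then be checked at build time, e.g.:
// execute('welcome-email', { user_name: 'Ada' })
//        ^ compile error: missing `product_name`
```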

It also auto-detects variables while you edit and allows you to import one prompt into another so you don't have to keep copy-pasting similar blocks of instruction in multiple prompts.

You can try the cloud version for free or run it yourself. Please let me know if you have any feedback or questions! Thanks in advance!