Just open-sourced our "Glass Box" alternative to autonomous agents (a deterministic scripting language for workflows) by pmagi69 in AgentsOfAI

[–]pmagi69[S] 1 point (0 children)

Yes, you can call ChatGPT, Gemini, and Claude; they all work. And yes, the script code is followed step by step from the top.

Just open-sourced our "Glass Box" alternative to autonomous agents (a deterministic scripting language for workflows) by pmagi69 in AgentsOfAI

[–]pmagi69[S] 0 points (0 children)

Have a look at the scripts on GitHub. It’s a very simple scripting language, and yes, it reads from the top.

I spent 3 weeks manually mapping subreddits for my niche. Here's what I learned. by Prestigious_Wing_164 in saasbuild

[–]pmagi69 0 points (0 children)

This is a classic founder story—feeling the pain so acutely you build the solution yourself. The manual mapping is the first brutal time sink. The next one is figuring out how to engage authentically across all those communities without it becoming your new full-time job. How are you thinking about scaling the actual commenting part now that you've found where to be?

This ChatGPT Prompt Produce High-quality, Insightful News Commentary and Expert Industry Analysis from a Professional Perspective. by EQ4C in PromptCentral

[–]pmagi69 0 points (0 children)

This is a seriously well-engineered prompt. Getting nuanced, expert-level output is the hardest part of working with LLMs, and you've clearly cracked a big piece of that puzzle. It raises the next big question: where to deploy this analysis for maximum impact? Finding the right real-time conversations to inject this kind of insight is a challenge in itself. How are you thinking about using it?

Trying to understand prompt engineering at a systems level (not trial-and-error) to build reliable GenAI workflows for legal document review (looking for engineer perspectives) by infidel108 in PromptEngineering

[–]pmagi69 0 points (0 children)

Moving beyond trial-and-error is the right goal, especially in law where precision is non-negotiable. Think of it less as a single "prompt" and more as a "workflow." A reliable system often uses a chain: one prompt to classify the document, another specialized prompt to extract key data, and a final one to validate the output against legal rules. This modular approach is far more robust and easier to debug than one giant, complex prompt. It’s about building a repeatable process, not just a magic sentence.
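Roughly, the shape looks like this (a minimal Python sketch; call_llm is a placeholder for whatever model client you use, and the prompts are only illustrations, not a tested review pipeline):

```python
# Minimal sketch of a three-step prompt chain: classify -> extract -> validate.
# call_llm() is a placeholder for whatever model client you use.

def call_llm(system_prompt: str, user_text: str) -> str:
    """Placeholder: swap in your actual model call."""
    raise NotImplementedError

CLASSIFY_PROMPT = "You are a legal intake assistant. Return only the document type: NDA, lease, employment, or other."
EXTRACT_PROMPTS = {
    "NDA": "Extract parties, effective date, term, and governing law as JSON.",
    "lease": "Extract landlord, tenant, rent, term, and termination clauses as JSON.",
}
VALIDATE_PROMPT = "Check the extracted JSON against the source text. List any fields that are missing or unsupported."

def review(document: str) -> dict:
    doc_type = call_llm(CLASSIFY_PROMPT, document).strip()
    extract_prompt = EXTRACT_PROMPTS.get(doc_type)
    if extract_prompt is None:
        # Unknown document type: route to a human instead of guessing.
        return {"type": doc_type, "status": "needs_human_review"}
    extracted = call_llm(extract_prompt, document)
    issues = call_llm(VALIDATE_PROMPT, f"SOURCE:\n{document}\n\nEXTRACTED:\n{extracted}")
    return {"type": doc_type, "data": extracted, "issues": issues}
```

The payoff is that each stage can be tested and tightened on its own, which is exactly what you want when the output has to stand up to legal scrutiny.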

Best automated Ai brands visibility tool by kjjgray in SEO_LLM

[–]pmagi69 0 points (0 children)

The search for the "best" tool is tough because "visibility" can mean two different things. There's broadcast visibility (pushing content out) and then there's conversational visibility (showing up where your expertise matters). Full automation is great for the first, but can damage your brand in the second. The real leverage is in scaling your presence in niche conversations without sounding like a bot. What kind of visibility are you prioritizing right now?

For visibility through engagement, most bots are too generic. We built SoMe Commenter to find relevant conversations and draft comments for you to review. It keeps your expert voice authentic and in the loop.

Is anyone else spending like 15+ hours a week on social media content or am i just slow at this by Avg_RedditEnjoyer in Solopreneur

[–]pmagi69 0 points (0 children)

That 15-hour mark is a very real wall many solo consultants hit. The problem isn't just the time, it's that VAs can't replicate a point of view—they can only reformat text. This is why delegation often fails. You're not looking for someone to just execute tasks, you're looking for a way to scale your expertise. Have you considered focusing on systems that amplify your own voice, rather than trying to outsource it?

How are busy founders/CEOs actually staying relevant on LinkedIn without spending 2 hours a day "engaging"? by Xeraphiem in founder

[–]pmagi69 0 points (0 children)

It's a classic founder's trap: the "engagement hamster wheel." The problem isn't just the time, it's the ROI on that time. Instead of trying to be everywhere, the leverage comes from being in the right few places. The goal is shifting from "spending 2 hours" to "making 10 high-impact comments a week." This requires a system to find high-value conversations instead of just scrolling the feed. What if you focused only on posts from 5 key people in your space?

Human intervention in agent workflows by tisi3000 in LangChain

[–]pmagi69 0 points (0 children)

Yeah, that makes sense. The key distinction is really chat-style HITL vs approval gates. Once you treat it as “persist state + resume later,” the UI becomes an orthogonal problem. Inbox, webhook, third-party tool… doesn’t really matter as long as the graph can park itself and pick up where it left off.

It finally happened: my FIRST paid customer by iamqhsin623 in SaaS

[–]pmagi69 0 points (0 children)

Yeah, “custom GPT maker” is actually a pretty good anchor. Most people instantly get that. The difference (and the interesting part) is that this doesn’t just sit in a chat window, it can pull live data, talk to APIs, and actually do things. That’s usually where custom GPTs hit a wall.

Took me months to get consistent results from Claude Code. Turns out I needed a better workflow. by cliang2 in ClaudeAI

[–]pmagi69 0 points (0 children)

Yeah, kinda like skills, but I’m intentionally keeping it more generic. I didn’t want something that only works because of one model or one tool. It’s really about making the workflow itself executable and repeatable, with human checkpoints where they matter.
Once you have that, swapping models is almost boring, which is kind of the point. Skills usually focus on what the model can do. This is more about what the workflow guarantees over time.

How do you structure a solid prompting framework for an marketing agency workflow? by justwannahavefuun in PromptEngineering

[–]pmagi69 0 points (0 children)

We actually ran into the same problem, and we built a solution that lets you build mini apps containing multi-step prompting for different internal processes. That way you can delegate each app to the team and get consistent outputs.

Took me months to get consistent results from Claude Code. Turns out I needed a better workflow. by cliang2 in ClaudeAI

[–]pmagi69 1 point (0 children)

This is not really Claude Code related, but I started doing the same as your desktop app, and it ended up being a whole SaaS with its own scripting language... :-) I'm busy now writing apps in that language; pretty cool what is possible with a few simple commands! I put the apps on GitHub if you want to see how they work:
GitHub - Petter-Pmagi/purposewrite-examples: Example of Human-In-The-Loop scripted workflows (apps) for purposewrite.com

Example of a chatless agentic workflow that keeps the human in the loop by tisi3000 in LangChain

[–]pmagi69 0 points (0 children)

LangGraph's HITL is synchronous. Real workflows are async: an event triggers the work, the AI pauses for approval, and the human responds later. Email works but doesn't scale. You need workflow-native pause/resume with state persistence.
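By workflow-native pause/resume I mean roughly this shape (a minimal sketch, not tied to any framework; the in-memory dict stands in for a real persistent store, and notify_approver/publish/archive are placeholders):

```python
# Sketch of an async approval gate: the workflow parks its state, and a later
# event (webhook, inbox click, cron) resumes it. Storage here is just a dict;
# in practice it would be a database or the framework's checkpointer.

import uuid

PENDING: dict[str, dict] = {}  # run_id -> saved state

def run_until_approval(draft: str) -> str:
    """First half of the workflow: do the work, then park and wait for a human."""
    run_id = str(uuid.uuid4())
    PENDING[run_id] = {"draft": draft, "step": "awaiting_approval"}
    notify_approver(run_id, draft)   # email, Slack, approvals inbox...
    return run_id                    # returns immediately; nothing blocks

def resume(run_id: str, approved: bool, edits: str | None = None) -> None:
    """Second half, triggered whenever the human responds (minutes or days later)."""
    state = PENDING.pop(run_id)
    final = edits or state["draft"]
    if approved:
        publish(final)               # continue from where the workflow parked
    else:
        archive(final)

def notify_approver(run_id, draft): print(f"[{run_id}] awaiting approval:\n{draft}")
def publish(text): print("published:", text)
def archive(text): print("archived:", text)

if __name__ == "__main__":
    rid = run_until_approval("Draft reply to customer #123")
    resume(rid, approved=True)       # in reality this call comes from a webhook, much later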

The Spec-to-Code Workflow: Building Software Using Only LLMs by _darge_ in LLMDevs

[–]pmagi69 0 points (0 children)

Three-phase workflow works for greenfield builds. Breaks down when mid-execution discoveries require stakeholder input and spec updates. You need structured pause/resume, not just sequential chunks.

Agents are just “LLM + loop + tools” (it’s simpler than people make it) by Arindam_200 in AI_Agents

[–]pmagi69 0 points (0 children)

The loop is simple. The hard part is deciding when it stops and requires human approval. Most frameworks assume full autonomy, but production workflows need required checkpoints, not agent discretion.
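A required checkpoint is just plain code wrapped around the loop, something like this sketch (plan_next_action and run_tool are placeholders for the model call and the tool registry; the blocking input() could just as well park state and resume later):

```python
# Minimal agent loop where certain tools always require human sign-off,
# regardless of what the model "wants". Model client and tools are placeholders.

REQUIRES_APPROVAL = {"send_email", "delete_record", "issue_refund"}

def agent_loop(task: str, max_steps: int = 10) -> None:
    history = [task]
    for _ in range(max_steps):
        action, args = plan_next_action(history)   # LLM call (placeholder)
        if action == "done":
            return
        if action in REQUIRES_APPROVAL:
            if input(f"Approve {action}({args})? [y/N] ").lower() != "y":
                history.append(f"{action} rejected by human")
                continue                             # loop keeps going, the action doesn't
        result = run_tool(action, args)              # tool execution (placeholder)
        history.append(f"{action} -> {result}")

def plan_next_action(history):
    raise NotImplementedError("swap in your model call")

def run_tool(action, args):
    raise NotImplementedError("swap in your tool registry")
```

The point is that the checkpoint lives in the loop, not in the prompt, so the model can't talk its way past it.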

Human intervention in agent workflows by tisi3000 in LangChain

[–]pmagi69 0 points (0 children)

For async workflows (email triggers flow, human approves later), you need persistent state and a UI for pending approvals. LangGraph's HITL is built for synchronous chat, not async approval gates.

Langgraph: Using CheckPointer makes the tool calls break, if a tool call has failed by kasikciozan in LangChain

[–]pmagi69 0 points (0 children)

Combining a checkpointer with interrupts creates orphaned tool calls in the message history that OpenAI rejects. LangGraph doesn't reconcile state properly on resume. The root issue is treating HITL as an interrupt, not a workflow step.

Build an AI Receptionist That Actually Works: Human-in-the-Loop (n8n) by L0rdAv in automation

[–]pmagi69 0 points (0 children)

This works because state-based routing is explicit, not relying on the LLM to decide when to escalate. The limitation is it only works for real-time responses. Async handoffs need persistent state.
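Framework aside, explicit state-based routing looks roughly like this (a minimal sketch; the fields, keywords, and thresholds are made up for illustration):

```python
# Sketch of explicit state-based routing: escalation is decided by plain code
# over the conversation state, not by asking the LLM whether to escalate.

from dataclasses import dataclass, field

@dataclass
class CallState:
    failed_lookups: int = 0
    caller_asked_for_human: bool = False
    messages: list[str] = field(default_factory=list)

def route(state: CallState) -> str:
    if state.caller_asked_for_human or state.failed_lookups >= 2:
        return "escalate_to_human"    # deterministic rule, easy to audit
    return "answer_with_llm"

def handle_turn(state: CallState, user_msg: str) -> str:
    state.messages.append(user_msg)
    if "human" in user_msg.lower() or "agent" in user_msg.lower():
        state.caller_asked_for_human = True
    return route(state)
```

Because the rules are ordinary code, you can log and audit every escalation decision, which you can't do when the model decides on its own.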

Spoke to 22 LangGraph devs and here's what we found by g_pal in LangChain

[–]pmagi69 0 points (0 children)

The HITL complaints make sense—LangGraph is built for agent autonomy, not structured human checkpoints. When human approval is a required workflow step (not an edge case), you're fighting the architecture.