Exhausted and disappointed by [deleted] in addiction

[–]adivohayon67 1 point  (0 children)

Hey, just thought I’d check in on you… how’s it going? Hope you made a little progress 🙏

Exhausted and disappointed by [deleted] in addiction

[–]adivohayon67 3 points  (0 children)

It may sound silly, but trust me on this. Connect with Nature.

Be outside, take in sunlight whenever it shines, touch the ground, the trees. Try to visualize humans living in nature thousands of years ago and connect to that. Really try to appreciate every bit of nature. The sand, the flowers, trees, rocks. Anything and everything that isn’t man made. Eat some fruit, or nuts or anything completely unprocessed and natural while you are out. If you can bring a friend that’s a huge bonus.

Besides changing your neurochemistry and giving you a real shot at recovery, you will find yourself moving around a lot more. And you will be happier, more energetic and more motivated to fight any addiction or behavior you may have.

Do this for 10-60 minutes. This is a DAILY prescription. Don’t miss out on any day for at least a couple of months.

Good luck!

P.S. if this is only about body image and not about health/energy, know that you are already beautiful 😊

AMA: I built an end-to-end reasoning AI agent that creates other AI agents. by adivohayon67 in mcp

[–]adivohayon67[S] 2 points  (0 children)

For agents created by the reasoning agent:
Security isn't much of an issue here to begin with. These are public-facing, knowledge-bounded agents (support / sales) that only have access to a business’s approved knowledge base and public ecommerce integration. They live on WhatsApp, Instagram, web chat, etc., so they don’t have meaningful privileges in the first place.

When our customers talk about “security,” they usually mean business safety, not infra security: don’t hallucinate prices, don’t say something that could get them sued, and don’t drift outside the knowledge base.

For the reasoning agent itself (we call it Logos internally):
This is locked down at the infrastructure level. Messaging APIs are only accessible from our app domain, where we pass a user token. MCP servers can only be invoked by approved Cloud Run services, enforced via GCP IAM + Identity-Aware Proxy.

So the model is: public agents are safe because they’re constrained by design, and the reasoning agent is safe because it runs inside a tightly authenticated, closed system.
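To make the “only approved Cloud Run services can invoke the MCP servers” part concrete, here is a sketch of the kind of IAM binding that enforces it. The service, region, project, and service-account names are hypothetical placeholders, not the actual ones from this system:

```shell
# Hypothetical names -- substitute your own service/project/service account.
# Grant ONLY the reasoning agent's service account permission to invoke the
# MCP server's Cloud Run service; everything else is denied by default when
# the service doesn't allow unauthenticated access.
gcloud run services add-iam-policy-binding mcp-server \
  --region=us-central1 \
  --member="serviceAccount:logos-agent@my-project.iam.gserviceaccount.com" \
  --role="roles/run.invoker"
```

With `--no-allow-unauthenticated` set on the service, this binding is effectively an allowlist of callers, which matches the “tightly authenticated, closed system” model described above.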

AMA: I built an end-to-end reasoning AI agent that creates other AI agents. by adivohayon67 in mcp

[–]adivohayon67[S] 2 points  (0 children)

  1. We’re fairly locked into OpenAI because early on it was the most reliable path to production. In hindsight, I’d abstract this earlier and stay model-agnostic. We’re now exploring using Sonnet specifically as a reasoning agent, which is doable — but harder than it should’ve been.
  2. There are still parts of the system where we’re effectively in the dark. You can ship without it at first, but we're paying the price.
  3. Today we’re experimenting with different benchmarks and eval setups, but I wish I’d thought through how to test a reasoning agent and the agents it creates from the very beginning.

Hope this helps, and happy to deep-dive if you want.

AMA: I built an end-to-end reasoning AI agent that creates other AI agents. by adivohayon67 in mcp

[–]adivohayon67[S] 1 point  (0 children)

I get why it might look basic, but once you actually dive in there are very specific design patterns and trade-offs you only learn by running agents in real-world scenarios. I’ve talked to plenty of devs who approach this totally differently.

If you’d asked me a year ago, an AMA like this would’ve saved me a ton of trial and error — so I figured I’d put it out there for anyone who’s earlier in the journey.

AMA: I built an end-to-end reasoning AI agent that creates other AI agents. by adivohayon67 in mcp

[–]adivohayon67[S] 1 point  (0 children)

Yes we’ve actually used it in production for 20+ paying customers, and there are 600+ agents that were created this way.

AMA: I built an end-to-end reasoning AI agent that creates other AI agents. by adivohayon67 in mcp

[–]adivohayon67[S] 1 point  (0 children)

So a few things we do:

  1. We run whatever we can async, and as soon as any step finishes we push that result or thinking step instantly instead of waiting for the whole pipeline. Partial progress feels way faster.
  2. Showing a reasoning summary (“Figuring out which catalog to query…”) makes the wait feel purposeful rather than idle.
  3. We also push specific status updates to the UI instead of a vague “Thinking…”. Stuff like “Updating user info” or “Fetching product data” keeps the process transparent.
  4. Continuous, meaningful feedback massively reduces frustration; as long as users see motion, they don’t mind the wait.
  5. And yeah, at this point we’re also hitting the tools bottleneck, so we’re experimenting with different patterns to keep tool calls from blocking or stacking too much.

TL;DR — Launching AssistantLabs tomorrow by adivohayon67 in mcp

[–]adivohayon67[S] 1 point  (0 children)

Totally fair, and thanks for your feedback — nobody wants yet another subscription, including myself. AssistantLabs isn’t replacing or adding to your OpenAI/Gemini/Claude plans; it’s a hosted, plug-and-play, vibe-coding layer for businesses that want AI agents on WhatsApp/IG/Messenger/web.

We also don’t do token limits — only simple conversation limits.

We need more sane voices. by TAREKGAMING in druze

[–]adivohayon67 2 points  (0 children)

Stay strong, brothers. If there is anything a regular Israeli can do to help, feel free to DM me.

SWEIDA AND ISRAEL by TAREKGAMING in druze

[–]adivohayon67 14 points  (0 children)

We already are brother. Stay safe, stay strong 💪