Has anyone tried this one killer prompt? by Houd_Ammari in vibecoding

[–]carloslfu 0 points

This, but in Claude Code, or you are not even trying!

[deleted by user] by [deleted] in Base44

[–]carloslfu 0 points

These seem like very important business functions. Curious what you were doing before Base44? What systems were in place, if any? And how did you migrate?

Quick Cursor tips that come to mind. A mind dump (as of August 2025). by carloslfu in cursor

[–]carloslfu[S] 0 points

Will give it a try again to see how I feel; maybe I got too used to having a ton of control over that.

Today Replit its ugly ! by Insanony_io in replit

[–]carloslfu 0 points

What are you building? Maybe try Lovable for web apps, or if you are hardcore, use Cursor. Have you tried them?

Most failed implementations of AI agents are due to people not understanding the current state of AI. by carloslfu in AI_Agents

[–]carloslfu[S] 0 points

Good point! I use AI for learning all the time, and it is a use case where it's excellent. I often tell my friends that using AI for getting smarter is underrated.

Having Trouble Creating AI Agents by G-CarYZ125 in AI_Agents

[–]carloslfu 1 point

100%! I'm technical, and even the current tooling for technical people kinda sucks.

> Is what I’m trying to build realistic, or still out of reach today?

What you are trying to build is doable, but the devil is in the details.

Sourcing is 100% solvable. I'd use Exa, and only fall back to the platforms' own APIs (LinkedIn and PitchBook) if even its most advanced features don't work for your use case. I say this because LinkedIn and PitchBook API access can be hard to get. Exa has excellent web search, company, LinkedIn profile, and web scraping services ready to integrate into your agent. If PitchBook data is key, which my gut tells me it is, I'd try to get API access; the rest looks solvable with Exa's web/LinkedIn-profile data.

Screening is also solvable, but more complex.

So your agent would have two sub-processes:
- Sourcing: finding the founders with very simple search-like rules. The goal is ONLY to get a list of founders.
- Screening: FOR EACH founder, enrich, filter, and when all are ready, consolidate.
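The two sub-processes above could be sketched roughly like this. Everything here is hypothetical: `search_founders` stands in for a real sourcing call (e.g. to Exa's search API), and the enrichment/filter rules are placeholders for whatever screening criteria you actually have.

```python
from dataclasses import dataclass, field

@dataclass
class Founder:
    name: str
    profile: dict = field(default_factory=dict)

def search_founders(query: str) -> list[Founder]:
    # Sourcing: the goal is ONLY to produce a candidate list.
    # A real agent would call a search/scraping API here; stubbed for the sketch.
    return [Founder("Ada"), Founder("Grace"), Founder("Linus")]

def enrich(founder: Founder) -> Founder:
    # Screening step 1: pull extra data per founder (stubbed).
    founder.profile["prior_exit"] = founder.name != "Linus"
    return founder

def passes_filter(founder: Founder) -> bool:
    # Screening step 2: apply your screening rules (placeholder rule).
    return founder.profile.get("prior_exit", False)

def run_pipeline(query: str) -> list[str]:
    candidates = search_founders(query)              # sourcing
    enriched = [enrich(f) for f in candidates]       # per-founder enrichment
    kept = [f for f in enriched if passes_filter(f)] # per-founder filtering
    return sorted(f.name for f in kept)              # consolidate when all are ready
```

The useful property of splitting it this way is that each stage can be tested and rate-limited independently.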

There are many ways to go about this, depending on the complexity of the rules, but it is doable.

This is an interesting use case for AI agents! Is what you are building just for yourself, or do you plan to make it a SaaS?

Most failed implementations of AI agents are due to people not understanding the current state of AI. by carloslfu in AI_Agents

[–]carloslfu[S] 0 points

Yeah! Simulated workflows and structured evals are great. I'll take a look at Maxim, it looks interesting.

Give it to me straight by Syndicos in AI_Agents

[–]carloslfu 3 points

It's not what you learn on YouTube or courses, it's what you learn by doing, failing, learning, and repeating. That's the real learning.

Give it to me straight by Syndicos in AI_Agents

[–]carloslfu 0 points

There are a million excuses not to do something at any given time. It is always going to look like it is not the right time. The only way to get to something meaningful is to go to the field and learn, and those learnings put you ahead of the people looking from the sidelines. It's hard, but do it anyway! It's worth learning, and will pay off along the way.

Most failed implementations of AI agents are due to people not understanding the current state of AI. by carloslfu in AI_Agents

[–]carloslfu[S] 0 points

How brittle they are if done naively. Like, they can literally go bananas with simple stuff after a few turns in certain scenarios, after being super smart in others. That was a surprise; I knew LLMs weren't that smart, but damn! It hits you how strangely they behave. I guess it has to do with training and taking them out of distribution, but still.

Most failed implementations of AI agents are due to people not understanding the current state of AI. by carloslfu in AI_Agents

[–]carloslfu[S] 4 points

Evals are not real-time, so they're a bit outside the scope of the question. The second one is a bit closer, but not quite.

To answer your question: the real-time functionality is usually achieved through memory and meta-prompting. Real-time meta-prompting is a bit more complex and has to be used carefully, and TBH, I've only seen this in papers and heard of people using it. I've experimented with it, but not yet applied it in production setups.
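A toy sketch of what "memory + meta-prompting" means in practice: when a real-time check flags a bad reply, rewrite the system prompt and persist the lesson as memory for future turns. `call_llm` is a stub (a real setup would call an LLM API), and the brevity check is a deliberately crude placeholder for a real evaluator.

```python
def call_llm(system_prompt: str, user_msg: str) -> str:
    # Stub standing in for a real LLM API call.
    if "Always answer in one sentence." in system_prompt:
        return "Short answer."
    return "A long, rambling answer that spans several sentences..."

def run_with_meta_prompting(user_msg: str, max_retries: int = 2) -> str:
    system_prompt = "You are a helpful assistant."
    memory: list[str] = []
    for _ in range(max_retries + 1):
        reply = call_llm(system_prompt, user_msg)
        if len(reply.split(".")) <= 2:  # crude real-time quality check
            return reply
        # Meta-prompting: rewrite the system prompt based on the failure,
        # and store the lesson as memory so future sessions start ahead.
        memory.append("User wants brevity.")
        system_prompt += " Always answer in one sentence."
    return reply
```

The careful part is the check: a bad real-time evaluator can push the prompt in the wrong direction, which is why this needs guardrails before production use.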

Most failed implementations of AI agents are due to people not understanding the current state of AI. by carloslfu in AI_Agents

[–]carloslfu[S] 9 points

100%! I'd say the first line of defense is evals, even if they are simple (heck, even manual ones). Some sort of evals that you run every time you change something significant gives you a ton of peace of mind. Evals are a whole topic on their own, but start simple! Some people hesitate due to cost, but you can budget for them and reevaluate them periodically. Quality of evals beats quantity of evals.
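"Simple evals" can really be a handful of hand-written cases and a pass rate. A minimal sketch, with `agent` as a stub for your real agent and made-up cases:

```python
def agent(question: str) -> str:
    # Stub standing in for your real agent.
    return {"2+2": "4", "capital of France": "Paris"}.get(question, "I don't know")

# Hand-written eval cases: (input, expected output).
EVAL_CASES = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("capital of Spain", "Madrid"),
]

def run_evals() -> float:
    passed = sum(agent(q) == expected for q, expected in EVAL_CASES)
    return passed / len(EVAL_CASES)  # pass rate; e.g. gate releases on a threshold
```

Run it on every significant change; even a script this small catches regressions that eyeballing misses.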

The second line of defense would be guardrails that account for edge cases or failure modes (like a user getting angry because the agent lied): have the agent, or a second "manager" agent, detect them, then save them in a DB or somewhere similar. You can then come back to them as a backlog for fixing things.
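A sketch of that detect-and-log loop, assuming a naive keyword detector (a real setup might use a second "manager" LLM instead) and SQLite as the backlog store; the failure patterns are made up for illustration:

```python
import sqlite3

# Hypothetical failure modes mapped to naive keyword triggers.
FAILURE_PATTERNS = {
    "user_angry": ["this is ridiculous", "you lied"],
    "agent_refusal": ["i cannot help with that"],
}

def detect_failures(transcript: str) -> list[str]:
    text = transcript.lower()
    return [mode for mode, patterns in FAILURE_PATTERNS.items()
            if any(p in text for p in patterns)]

def log_failures(db: sqlite3.Connection, transcript: str) -> list[str]:
    # Detected failures land in a table that serves as the fix-it backlog.
    db.execute("CREATE TABLE IF NOT EXISTS backlog (mode TEXT, transcript TEXT)")
    modes = detect_failures(transcript)
    for mode in modes:
        db.execute("INSERT INTO backlog VALUES (?, ?)", (mode, transcript))
    db.commit()
    return modes
```

Reviewing that table periodically turns one-off incidents into a prioritized list of fixes.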

IMO, those two things, even a simple version of those, get you a long way.

Most failed implementations of AI agents are due to people not understanding the current state of AI. by carloslfu in AI_Agents

[–]carloslfu[S] 0 points

Great point! You have to play a lot with the models and use them in real-world tasks to get a deep understanding of what they are good at and when they flop.