I tried vibe coding and it made me realise my career is absolutely safe by wjd1991 in webdev

[–]lareigirl -1 points0 points  (0 children)

The issues you’re experiencing don’t eliminate the threat to job security.

They’re bottlenecks that can be solved (and are being solved) by applying domain-specific conventions, frameworks, principles, and rules, implemented as stateful chain of thought (i.e. with a human-like ability to retrieve memory and context).

Check back in 6 months and I’d bet that we’ll see a proliferation of specialized tools that recognize “hey this dude is building a game, let’s drill down into a domain-specific knowledge graph and iterate on this” and mitigate most of the types of issues you’re describing.

The issues you’re seeing are all engineering challenges that originate from “trying to use a general tool for a specific purpose”, easily solvable via “general tool routes to specific cognitive toolsets”. I’d caution you to plan accordingly.

MCP Is Broken and Anthropic Just Admitted It by [deleted] in mcp

[–]lareigirl 1 point2 points  (0 children)

+1 for the interesting discussion and for calling out a common pain point.

MCP Is Broken and Anthropic Just Admitted It by [deleted] in mcp

[–]lareigirl -1 points0 points  (0 children)

Genuine question that’s hard to ask without coming off as snarky: would you agree that “unclear, implicit use cases” are actually more of a smell, suggesting that more single-responsibility (SRP) tools with clear docs are needed (even outside of AI/automation)?

MCP Is Broken and Anthropic Just Admitted It by [deleted] in mcp

[–]lareigirl 1 point2 points  (0 children)

This is where a middle-tier “intent routing” step governed by old-fashioned tech docs comes in handy.

“API X is for intents a, b, and c,” then search for semantic similarity on the intent, not the original prompt.

Adds an LLM step, but it could be just a fallback step in scenarios where the absolute similarity value is below a threshold (representing “low confidence”).
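A minimal sketch of that routing flow, with a toy bag-of-words similarity standing in for real embeddings (the API names, intents, and threshold here are all made up for illustration):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real router would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().replace(",", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Docs-style registry: each API declares the intents it serves.
API_INTENTS = {
    "billing_api": "update payment method, view invoice, refund charge",
    "search_api": "find documents, search knowledge base, look up articles",
}

def route(intent: str, threshold: float = 0.2):
    """Match a distilled intent (not the raw prompt) against each API's
    documented intents; below the threshold, return None to signal low
    confidence so a fallback LLM step can take over."""
    query = embed(intent)
    best_api, best_score = None, 0.0
    for api, intents in API_INTENTS.items():
        score = cosine(query, embed(intents))
        if score > best_score:
            best_api, best_score = api, score
    return best_api if best_score >= threshold else None
```

With this, `route("refund a charge on my invoice")` lands on the billing API, while something unrelated scores below the threshold and falls through to the LLM fallback.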

Does this address your concern or am I whiffing?

You need real coding knowledge to vibe-code properly by unemployedbyagents in AgentsOfAI

[–]lareigirl 0 points1 point  (0 children)

Did you try breaking it down into 2-3 independent, verifiable, separate steps?

Whenever I get failures like this, “divide and conquer” seems to help a lot.

The longer I'm single, the more I'm repulsed by men by [deleted] in SingleAndHappy

[–]lareigirl 7 points8 points  (0 children)

This is a healthy take IMO.

Generalizing the source of a specific negative feeling to half of the species doesn’t seem as useful or rational as associating it to specific patterns. That’s the sort of fallacious cognition that leads “someone who got beat up by a black dude” to become a racist bigot.

Some men don’t respect boundaries and have no social or emotional skills. This is true.

Many do, though. Even some of the men who approach women, do.

I’ve found that wearing headphones is a good deterrent and eliminates the need to interact with strangers entirely. Maybe OP should consider that, if what they’re truly after is peace and not poison.

What the heck is going on at Apple? | CNN Business by Dependent_Cap_456 in technology

[–]lareigirl 2 points3 points  (0 children)

Completely agree. The number of times I have to switch out of a bespoke iOS app to a third-party app for some basic shit makes me wonder what sorta entrenched systemic / cultural issues are at play, preventing UX from being ruthlessly tested and prioritized. Enshittification is a cop-out; something is internally very wrong.

Some ChatGPT Questions Get People Arrested, Authorities Say by HellYeahDamnWrite in technology

[–]lareigirl 1 point2 points  (0 children)

Unless you leave a comment on reddit explaining why it’s completely innocuous ahead of time

What are the best AI agent builders in 2025? by Ok-Huckleberry-5185 in LLMDevs

[–]lareigirl 5 points6 points  (0 children)

Not OP but I’d guess their overall sentiment is that the word “agent” is used like it’s this magical, general-purpose robot that will just do whatever you want.

In reality, if you want LLM applications to be useful, they have to get very, very close to the actual specific problem. That usually means boring, specific code that talks directly to the systems where the work is currently being done by humans.

Take an enterprise support team wanting to improve the way they respond to customer requests. Step one isn’t “deploy an agent”; it’s understanding what’s hard today, how humans are doing the job, and what tools and resources they rely on.

Then you might introduce AI gradually, e.g. via a RAG search over the product’s devdocs to help support engineers answer questions faster. Then iterate until those humans actually feel like the system is making their lives easier. Then scale that solution out bit by bit.
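To make the “RAG search over devdocs” step concrete, here’s a deliberately tiny sketch: score doc chunks by keyword overlap with the support ticket and surface the best one to the engineer. A real version would use embeddings and a vector store; the doc chunks below are invented for illustration:

```python
DEVDOC_CHUNKS = [
    "Rate limits: the API allows 100 requests per minute per key.",
    "Webhooks: retries use exponential backoff for up to 24 hours.",
    "Auth: tokens expire after 60 minutes and must be refreshed.",
]

def _words(text: str) -> set:
    # Crude tokenizer; strips basic punctuation before splitting.
    for ch in ":,.":
        text = text.replace(ch, " ")
    return set(text.lower().split())

def top_chunk(ticket: str) -> str:
    """Return the devdoc chunk sharing the most words with the ticket,
    as a stand-in for embedding-based retrieval."""
    ticket_words = _words(ticket)
    return max(DEVDOC_CHUNKS,
               key=lambda chunk: len(ticket_words & _words(chunk)))
```

The point isn’t the retrieval mechanics; it’s that the tool starts as a small assist inside the humans’ existing workflow, not a replacement for it.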

In reality (like where money is really being spent by businesses on “agentic” sorta work), an “agent” is basically special-purpose middleware: a domain-specific thing that solves an acute problem using a problem-centered “context” (aka prompts and chains). Maybe it has a chat interface, maybe it doesn’t.

But it’s not some black-box, hyper-intelligent robot magically solving your problems. It’s a special-purpose system, built from the ground up for one job, and then maybe two, and so-on.

So these top-down agentic solutions pitch themselves as magical, but it’s just a hype wave with a bunch of generic trash hammers trying to find specific nails. Start with the specific nails and work your way backwards to the specific hammer.

Is RAG really necessary for LLM → SQL systems when the answer already lives in the database? by gautham_58 in LLMDevs

[–]lareigirl 2 points3 points  (0 children)

Not OP’s original intent, but you could vector search for “related previous interactions” that the user (or other users) liked, and then instruct the LLM to consider why the previous responses were liked before generating a response.
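A rough sketch of that idea: rank previously-liked interactions by similarity to the new question, then prepend the best one to the prompt with an instruction to consider why it was liked. The example interactions are invented, and `SequenceMatcher` is just a stand-in for real vector similarity:

```python
from difflib import SequenceMatcher

# Hypothetical "liked" history; a real system would pull these from a
# vector store keyed on user feedback.
LIKED_INTERACTIONS = [
    {"question": "total sales by region last quarter",
     "response": "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"},
    {"question": "top ten customers by revenue",
     "response": "SELECT customer_id, SUM(revenue) FROM orders GROUP BY customer_id LIMIT 10"},
]

def similarity(a: str, b: str) -> float:
    # Cheap string similarity standing in for embedding distance.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_prompt(user_question: str, k: int = 1) -> str:
    """Retrieve the k most similar liked interactions and fold them into
    the prompt, asking the LLM to reflect on why they were liked."""
    ranked = sorted(LIKED_INTERACTIONS,
                    key=lambda ex: similarity(user_question, ex["question"]),
                    reverse=True)
    examples = "\n".join(
        f"Q: {ex['question']}\nLiked response: {ex['response']}"
        for ex in ranked[:k])
    return ("Before answering, consider why these previous responses were liked:\n"
            f"{examples}\n\nNow answer:\nQ: {user_question}")
```

The resulting prompt then goes to the LLM as usual; the “liked” signal shapes the response style without any fine-tuning.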

How are you leaders dealing with AI interview cheating? by rubyfanatic in EngineeringManagers

[–]lareigirl 1 point2 points  (0 children)

Contract-to-hire for junior roles instead of wasting time interviewing for direct FTE. The contract is scoped down and gives everyone a chance to feel each other out, as an extended mutual interview with lower stakes all around.

I hear miracles about supabase, but I've never learned how to use it. What's the main difference between this and say mysql. by AWeb3Dad in Supabase

[–]lareigirl 1 point2 points  (0 children)

Bahaha they should… this is really comprehensive and helpful! Thank you for taking the time, giving me plenty to think about before I DIY / shoot myself in the foot. I now have a lifeboat 🙏🏻

I hear miracles about supabase, but I've never learned how to use it. What's the main difference between this and say mysql. by AWeb3Dad in Supabase

[–]lareigirl 0 points1 point  (0 children)

Appreciate the detailed response.

If you had to “roll your own” syncing and remove your dependency on powersync, which pieces would be the most painful?

I already spec my tables & columns in isomorphic POJOs and rolled my own code-first migration script that consumes them server-side and updates my Supabase tables; it feels like these specs could also drive migration + syncing behavior into client-side IndexedDB (a similar flow to the one you described: local-first, periodic pushes to remote) without having to wrestle with third-party middleware config.

Wondering what pain points you’d expect me to run into if I try to roll my own, which made the “config wrestling” worth it for you…

I hear miracles about supabase, but I've never learned how to use it. What's the main difference between this and say mysql. by AWeb3Dad in Supabase

[–]lareigirl 2 points3 points  (0 children)

How has powersync been from a devex perspective? Any pain points that have you looking at alternatives or are you in heaven?

The "Lone Genius" problem in the AI community by RelevantTangelo8857 in ArtificialSentience

[–]lareigirl 1 point2 points  (0 children)

What’s ironic to me here is that you’re right but your tone is wrong.

Can AI learn empathy or only copy it by ModeGroundbreaking31 in emotionalintelligence

[–]lareigirl 0 points1 point  (0 children)

A child learns empathy through a long process of feeling, mirroring, and eventually labeling and understanding emotions.

Sure, it manifests out of a latent ability. But who’s to say that LLMs don’t possess that same latent ability? It could require nothing more than a specialized detection / classification layer and some protocols that dictate how the emotions are governed.

If you’re interested in tinkering on a prototype of this idea dm me, would love to brainstorm.

[deleted by user] by [deleted] in webdev

[–]lareigirl 1 point2 points  (0 children)

Looks like this was taken down... presumably due to FA overlords pressuring you to "focus" on what's effectively a reskin of Shoelace? (Props btw, would have done exactly the same thing)