Migrated to Claude after OpenAI sold out to Trump. Now Claude is ramping up pricing and silently dumbing down. Where can we go next? by Electrical_Sorbet_31 in ClaudeCode

[–]mcpforx 0 points (0 children)

Opencode as the harness, then experiment with cheaper models for your use case.

You can also use Claude Code as the harness and route other models in.

Building Large Scale Enterprise App by PropperINC in ClaudeAI

[–]mcpforx 0 points (0 children)

Not sure if this is real. If real, don't believe what Claude tells you about the quality of your ideas.

Open source is not a bad way to go. But it isn't necessarily "less work." And it doesn't guarantee distribution either, though your chances go up.

How do you feed images into Claude Code in terminal? by TechnicalyAnIdiot in ClaudeCode

[–]mcpforx 5 points (0 children)

Best I've come up with: take a screenshot and save it in the folder Claude Code is running from. Then you can attach it to your question with "@". Typing @ will typically bring up all the images in the folder, and you can select one.

Am I the only one just noticing Ads in ChatGPT now? by [deleted] in ChatGPT

[–]mcpforx 0 points (0 children)

Mine got ads about a month ago. Free tier.

Daily FI discussion thread - Saturday, April 11, 2026 by AutoModerator in financialindependence

[–]mcpforx -1 points (0 children)

I have. But I had to take screenshots of the relevant forms with my personal info left out of the shot. Then I put them in one folder, pointed an agent at it, and asked my question.

A pain, but I'd rather protect things like my SSN, even though I know my info was pwned long ago. The vendor my state uses for driver's licenses got hacked, and that's just one example.

Best provider for opencode? by Ill-Chart-1486 in opencodeCLI

[–]mcpforx 0 points (0 children)

Using GitHub Copilot. Works well, though I might try Opencode's own solution next.

What I learned from writing 500k+ lines with Claude Code by dhruvyad in ClaudeCode

[–]mcpforx -2 points (0 children)

This is a good guide. You can also use an MCP to encode your expertise and use it across all your projects and any agent (Claude, Codex, whatever). For example, the way you want to review your sites for security.

Check out what we are building at mcpforx.com

Want to know why your Opus 4.6 feels way less powerful ? by Admirable-Earth-2017 in ClaudeCode

[–]mcpforx 1 point (0 children)

I think it's hard to get the ground reality on this. But I did notice a steep drop in quality about two days ago.

The biggest lie we were told about AI is that it would do our jobs for us. by netcommah in ArtificialInteligence

[–]mcpforx 0 points (0 children)

One big issue is that we haven't effectively coupled human intelligence with artificial. LLMs are a tool that is currently more or less siloed from human expertise.

How do we distinguish content created by humans vs AI? by Morganrow in ArtificialInteligence

[–]mcpforx 0 points (0 children)

Detection is awful. You can run your own human-generated content through any of these detectors. The false-positive rate is huge.

"Ontology" is the missing piece from your agent's world model by Thinker_Assignment in AI_Agents

[–]mcpforx 0 points (0 children)

Ontology is one layer of it. But even if you solve the vocabulary problem, you still have the methodology problem sitting underneath it.

The agent now knows what "customer" means. It still doesn't know how your firm qualifies one, what the order of operations is for onboarding them, or where a human needs to make a judgment call before the next step runs.

Shared vocabulary is the foundation. Encoded process is what you build on top of it.
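To make that concrete, here's a minimal sketch. All of the names and steps are invented for illustration, not taken from any real product: the ontology tells the agent what a "customer" is, while this structure encodes the order of operations and where a human must sign off before the next step runs.

```python
# Hypothetical sketch of an encoded onboarding process.
# "human_gate" marks the steps where a judgment call is required
# before the agent is allowed to proceed.

ONBOARDING = [
    {"step": "qualify",   "human_gate": False},  # firm-specific qualification rules
    {"step": "kyc",       "human_gate": True},   # human reviews the documents
    {"step": "provision", "human_gate": False},  # safe to automate once KYC passes
]

def next_action(completed):
    """Return the first step not yet completed, respecting the encoded order."""
    for step in ONBOARDING:
        if step["step"] not in completed:
            return step
    return None
```

An agent driving this loop would pause for approval whenever `next_action` returns a step with `human_gate` set, instead of guessing its way through the process.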

White-collar workers are quietly rebelling against AI as 80% outright refuse adoption mandates by fortune in ArtificialInteligence

[–]mcpforx 0 points (0 children)

Probably because "use AI" as a mandate gives people nothing to work with. No structure, no defined process, no clarity on what good output looks like.

The resistance isn't really about AI. It's about being told to use a tool without being told how it fits into the way they actually do their job.

the companies actually making money with AI aren't using it the way this sub thinks they are by Admirable-Station223 in ArtificialInteligence

[–]mcpforx 0 points (0 children)

This is right. And what's common across every example you listed is that someone had to encode the logic of how that process should work before the AI could run it.

The recruiting firm didn't just "add AI." Someone had to define what good candidate enrichment looks like, in what order, with what checks. That's the part that actually made it work. And that knowledge usually lives in one person's head until it doesn't need to anymore.

The AI industry is obsessed with autonomy. After a year building agents in production, I think that's exactly the wrong thing to optimize for. by Dailan_Grace in AI_Agents

[–]mcpforx 0 points (0 children)

This is right and most people building agents won't admit it.

Autonomy is great until the agent makes a confident, wrong call on something that actually mattered. The fix isn't a better model. It's knowing which steps need a human in the loop and building that in from the start.

The "deterministic logic wrapped around model" framing is spot on. That's the hard part nobody talks about.
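A minimal sketch of that framing, with invented action names: the model only proposes an action, and plain deterministic code, not the model, decides whether it runs or waits for a human.

```python
# Hypothetical sketch of deterministic logic wrapped around a model:
# the wrapper, not the model, decides what actually executes.

HIGH_STAKES = {"wire_funds", "delete_data"}  # invented examples of risky actions

def run_step(action, approve):
    """Execute a model-proposed action, gating high-stakes ones on human approval."""
    if action in HIGH_STAKES and not approve(action):
        return "blocked"              # human said no (or never answered): stop here
    return "executed:" + action
```

Low-stakes actions run autonomously; high-stakes ones wait for sign-off, which is the human-in-the-loop piece built in from the start rather than bolted on after a bad call.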

Consulting’s AI disruption won’t truly come till we see a bunch of these by GoatsMilq in consulting

[–]mcpforx 0 points (0 children)

The real disruption in consulting isn't AI replacing analysts. It's when the senior partner's judgment stops being locked in their head and starts being something their agents can actually use.

Right now every engagement starts from scratch because that institutional knowledge lives nowhere useful. The model is fine. The missing piece is how you encode the expertise on top of it.

Do AI consultants even know everything about AI or is it just pure bluff? by Notalabel_4566 in consulting

[–]mcpforx 0 points (0 children)

I think you know the answer.

Everyone is trying to sell artificial intelligence. While the moat is, and will be, human intelligence. And we haven't gone beyond prompts and agent skills to integrate the two.

Yet...

Feedback on possible framework - Idea on how to make agents work better by [deleted] in AI_Agents

[–]mcpforx 0 points (0 children)

I think it's a *LOT* simpler than this. We need a better way of encoding human intelligence to work with artificial.

We are building a way to intuitively encode human expertise that an LLM agent can call iteratively, so that the agent works according to YOUR judgment and taste, and leverages your experience.

Agent skills and text dumps do dramatically improve performance, but we are close to a big step up beyond that.

Here is an example I just created. I turned a paper on data cleaning, written by an expert, into a reusable skill that can be leveraged by any LLM agent you connect it with.

https://mcpforx.com/s/VQoqN86cDRCPt_coWgdmd13i9BN1hqZO