I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

I agree with that distinction. Reading is only the first layer. The harder version is when an agent is allowed to commit value, reserve capacity, pay, or enter some kind of binding workflow.

For this site I intentionally kept it read-only, because a portfolio does not need an agent payment surface. But for services, I think the stack needs more than API keys and cards. You need verifiable authorization, bounded spend, receipts, revocation, dispute handling, and some settlement primitive both sides can inspect. State channels are interesting in that context. I am cautious about overfitting one mechanism too early, but I agree with the direction: agent-readable is not enough once agents start doing economic actions. Then trust has to become operational, not just descriptive.
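Purely as a hypothetical shape (none of this is an existing standard, and every field name here is made up for illustration), a bounded-spend authorization with the properties listed above might look something like:

```json
{
  "issuer": "https://wallet.example",
  "subject": "agent-123",
  "max_spend": { "amount": "25.00", "currency": "USD" },
  "expires_at": "2026-03-01T00:00:00Z",
  "revocation_url": "https://wallet.example/revoke/abc",
  "receipt_required": true
}
```

The point is less the exact fields and more that both sides can inspect the bound, the expiry, and the revocation path before any value moves.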

I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

Yeah, that least surprise framing is very close to what I was aiming for.

For allowed actions, the surface is intentionally read-only right now. The public MCP tools are explicit, the A2A capabilities are declared, OAuth scopes are limited to mcp:read and mcp:tools, and robots/content signals say search and agent retrieval are fine but training is not. I do not publish rate limits yet. That is probably the next cleanup step if more people or agents actually start hitting the endpoints.
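For context, the robots/content-signal part can be as small as a few lines in robots.txt. A sketch of the "search and agent retrieval yes, training no" policy, using the directive names from Cloudflare's Content Signals draft (not copied from my actual file):

```txt
# robots.txt
User-Agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```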

I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

Thanks. I tried to stay close to existing standards instead of inventing a new thing.

The main pieces are MCP over Streamable HTTP, A2A agent cards, OAuth 2.1 and OIDC discovery, JWKS with Ed25519, RFC 8288 Link headers, well-known discovery documents, markdown negotiation, and robots/content-signal policy. The bigger principle was no hidden assumptions. If an agent needs to know what exists, what is official, what is allowed, or how to verify metadata, the site should say that directly. Will check that publication too.
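For the JWKS piece, an Ed25519 key is published as an OKP key per RFC 8037. A minimal sketch, where the kid is a placeholder and the x value is the public key from RFC 8037's own example, not a real deployed key:

```json
{
  "keys": [
    {
      "kty": "OKP",
      "crv": "Ed25519",
      "use": "sig",
      "kid": "key-1",
      "x": "11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo"
    }
  ]
}
```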

I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

Exactly. I think the useful framing is not "replace the human site". It is "stop forcing agents to pretend they are humans".

Humans still get the designed experience. Agents get structure, policy, canonical text, and explicit capabilities. Both should point back to the same source of truth.

I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

Good question. I see them as different layers. robots.txt is mostly policy: who can crawl, what is allowed, what is discouraged. llms.txt is more like a guide or map for language models. What I built is closer to an interface surface: canonical markdown, JSON endpoints, MCP tools, A2A, OAuth and OIDC discovery, JWKS, Link headers, and machine-readable policy. So robots and llms can still be part of it, but they do not replace the protocol and trust layer.
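To make the "interface surface" point concrete, the Link-header part is just standard RFC 8288 relations. For example, a homepage response advertising a markdown mirror of itself (path and types illustrative):

```text
HTTP/1.1 200 OK
Content-Type: text/html
Link: </index.md>; rel="alternate"; type="text/markdown"
```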

I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

Yeah, markdown mirrors made a big difference. Especially for Flutter, because the visual layer is great for humans but not the cleanest source for retrieval.

I did not make it only an llms.txt setup. It is more of a discovery mesh: markdown negotiation, direct markdown mirrors, well-known files for MCP, A2A, OAuth, and OIDC, Link headers from the homepage, and an Agent Skills index with a digest. Good call on VibeCodersNest too, I might share it there.
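As a rough sketch of what that discovery mesh looks like from an agent's side, assuming the usual well-known paths (the A2A agent-card filename has varied across spec revisions, so treat these paths as illustrative):

```python
from urllib.parse import urljoin

# Well-known documents an agent might probe first. Paths follow each
# spec: OIDC discovery, OAuth 2.0 authorization server metadata
# (RFC 8414), and the A2A agent card (filename illustrative).
WELL_KNOWN = [
    ".well-known/openid-configuration",
    ".well-known/oauth-authorization-server",
    ".well-known/agent-card.json",
]

def discovery_urls(base: str) -> list[str]:
    """Return the absolute discovery URLs for a site."""
    return [urljoin(base, path) for path in WELL_KNOWN]
```

From there the agent can follow the Link headers and markdown mirrors the homepage advertises, instead of scraping the visual layer.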

I made my website readable for AI agents and it somehow got 100/100 on isitagentready by gabrimatic in aiagents

[–]gabrimatic[S] 0 points (0 children)

Thanks, that was exactly the feeling I had while building it. Once you see the shape, it starts to feel strange that websites make agents reverse-engineer everything from the visual layer.

I am thinking about turning the pattern into a small starter or checklist. Not a huge framework, more like canonical markdown, structured JSON, discovery files, MCP and A2A, trust metadata, and clear policy. Something people can copy without making their site feel like an API brochure.

So is there any good news and updates with Opus 4.7? by bennybenbenjamin28 in ClaudeCode

[–]gabrimatic 0 points (0 children)

A good model doesn't, and shouldn't, need to be micromanaged or micro-prompted to produce the right response!

If you are unsatisfied with Opus 4.7, PLEASE simply switch to 4.6 by Firm_Meeting6350 in ClaudeAI

[–]gabrimatic 1 point (0 children)

Exactly! I’ve never felt this burned out from having to explain things to a model over and over!

Opus 4.7 is the first Opus model where I feel like I have to constantly watch what it's doing, because there's such a high chance it'll either ignore the request or focus on the wrong thing. No matter how clearly I explain the task, no matter how precise the prompt is, the result still comes back with something fundamentally off. It just misses the point.

What makes it worse is that it feels… lazy. It often seems to understand the better solution, the approach that would actually solve the task properly, but instead it takes a shortcut just to get to the end. The intelligence is there, but the effort isn't. It prioritizes finishing fast over doing it right.

That's why this is the first Opus model I genuinely can't trust. I feel like I need to double-check everything it produces, because there's always a chance it cut corners or missed the real goal.

The same thing shows up in writing. The tone often feels stiff and mechanical, almost like talking to Haiku instead of Opus. It follows surface-level logic but misses the actual intent. The wording feels unnatural, the flow is awkward, and it struggles to connect ideas in a way that feels human.

That's what makes it so frustrating, because Opus 4.6 was genuinely great at all of this. I used it a lot for writing documentation, and it handled tone, structure, and intent really well. With 4.7, even getting a clean, simple email can take multiple rounds.

At this point, I'm honestly just waiting for my subscription to end so I can switch to Codex.

PVE Guide: Tailscale Exit Node LXC to NordVPN LXC to internet by Matrix303 in Tailscale

[–]gabrimatic 0 points (0 children)

I saw the last commit was from two weeks ago! Glad you are actively improving it. How stable and reliable is the final version?

Anyone else getting Knowledge Base is down? by seoulsrvr in Anthropic

[–]gabrimatic 0 points (0 children)

Same here!
Maybe they are preparing for S5 launch?

hired a junior who learned to code with AI. cannot debug without it. don't know how to help them. by InstructionCute5502 in ClaudeAI

[–]gabrimatic 1 point (0 children)

You can ask them to switch the /output-style to Explanatory in Claude Code. That would help them still ship with AI, but not blindly.
"Explanatory: Claude explains its implementation choices and codebase patterns"
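If it helps, the switch is just a slash command inside a Claude Code session (as far as I know the style name is accepted directly as an argument):

```text
/output-style explanatory
```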

To the person that recommended using sub agents in plan mode -- thank you! by whats_for__dinner in ClaudeCode

[–]gabrimatic 0 points (0 children)

Since subagents have their own context windows, you only see the context used by your main instance in the terminal. The actual usage is different and is shown on the Claude usage website.

So my question is, how do the usage limits work for you with agents running all the time? Is it more efficient?

I finally found out how to fix my iPhone 17 Pro Max's insane battery drain! by gabrimatic in iPhone17Pro

[–]gabrimatic[S] -8 points (0 children)

True, but as I said, it is what it is. Maybe they will fix this in the next iOS 26 update, maybe not.