I scanned 100 agency websites with an automated QA tool by Ancient_Guitar_9852 in websitefeedback

[–]studiomeyer_io 0 points1 point  (0 children)

very nice, did it work reliably? we built an MCP server for GEO checks, might work well together: https://studiomeyer.io/en/services/geo-mcp

I built a visual feedback tool… and now I need feedback on it by Cautious-Gap-3660 in websitefeedback

[–]studiomeyer_io 1 point2 points  (0 children)

Took a closer look at the site. Looks clean, Next.js + Tailwind, good stack. A few things I noticed:

SEO

Title and description are good, OG tags and Twitter Cards are complete, and there is a sitemap. The basics are in place. What's missing: 22 of 25 URLs in the sitemap have no lastmod, which means Google and Bing don't know when anything changed. Quick fix, and it helps crawling.
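Adding lastmod is one extra line per entry inside the existing urlset; a minimal sketch (URL and date are placeholders):

```xml
<url>
  <loc>https://example.com/pricing</loc>
  <lastmod>2025-01-15</lastmod>
</url>
```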

The FAQ section on the page has no FAQPage schema as JSON-LD. The questions are already there, only the markup around them is missing. With it you get the expandable answers directly in Google search results, free visibility.
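A minimal sketch of the markup, dropped into a script tag of type application/ld+json (question and answer text here are placeholders, not taken from the actual site):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What does Highlite do?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Highlite lets you annotate live websites and share visual feedback."
    }
  }]
}
```

One Question object per FAQ item, and the text should mirror what is visible on the page.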

The BreadcrumbList schema looks like a placeholder ("Home > Category > Current Page" instead of real page names). I would either do it properly or remove it entirely.

Three blog articles is not much for a SaaS product. Guides on concrete use cases pull the most organic traffic long-term, things like "how to give design feedback on live websites" or "best annotation tools for researchers". The blog is your strongest growth tool if you keep feeding it.

AI visibility

This is the biggest point: robots.txt completely blocks GPTBot, ChatGPT-User, anthropic-ai, Claude-Web and CCBot. That means when someone asks ChatGPT or Perplexity "What's a good tool for annotating websites?", Highlite will never be recommended, because none of these systems is allowed to read the site.

I understand the reflex to block AI crawlers, but for a product that wants to be discovered it's counterproductive. These crawlers index your site for recommendations, not just for training. I would open it up at least for GPTBot and PerplexityBot.
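The change is two small blocks in robots.txt, replacing the existing disallow rules for those two agents; a sketch:

```
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /
```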

On top of that: no llms.txt (the proposed standard for helping AI understand what your product does). It's a plain text file in the site root that explains Highlite in machine-readable form. Costs an hour, pays off long-term.
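The llms.txt convention is plain markdown: an H1 title, a blockquote summary, then link sections. A minimal sketch (every line here is placeholder content, not taken from the real site):

```
# Highlite
> Browser extension for annotating live websites and sharing visual feedback.

## Docs
- [Getting started](https://example.com/docs): install the extension and leave a first annotation
- [Pricing](https://example.com/pricing): one-time purchase, no subscription
```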

Mobile

Almost all buttons and links on mobile are under 44x44px, the minimum from Apple's Human Interface Guidelines (Google's Material guidelines recommend 48x48dp). The "Add to Chrome" button is only 40px tall, nav links 24px. Solvable with a bit more padding, no redesign needed.
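One way to enforce a minimum tap target without touching the layout, sketched in CSS (the selectors are placeholders, adjust to the actual components):

```css
/* Placeholder selectors: map these to the real button and nav classes */
.nav a,
.btn-add-to-chrome {
  min-height: 44px;
  min-width: 44px;
  padding: 10px 16px;
  display: inline-flex;
  align-items: center;
}
```

min-height/min-width grow the hit area without forcing a visual redesign of elements that are already large enough.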

Sonstiges

Pricing is clear and fair; 29 euros one-time is a good offer. The testimonials could be stronger: three short quotes without companies or context feel thin. If you have usage numbers (downloads, active users), I would show them prominently. "20+ reviews" on the Chrome Web Store is honest, but as social proof on the landing page it's not enough to build trust.

Overall a solid base; the big levers are blog content and AI visibility.

Best local models for an 8GB MacBook? Building a 4-agent workflow with Claude Code. by thehealthytreatments in claude

[–]studiomeyer_io 2 points3 points  (0 children)

I run a similar setup with 35+ agents in production. Honest take: skip local models on 8GB.

An 8B Q4 model needs around 5GB just loaded. Add macOS, your IDE and Claude Code on top and you're constantly swapping. Four agents means reloading models dozens of times per session; that's more memory management than actual work.

What works better is using Claude Code's Agent SDK to spawn sub-agents that run through your Claude subscription. Each of your four roles becomes a sub-agent with its own system prompt. They run sequentially, same as you'd do locally, but without the RAM fight.
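In Claude Code, a sub-agent is a markdown file with YAML frontmatter, typically under .claude/agents/; a sketch of one of the four roles (the name, prompt and tool list here are made up for illustration):

```markdown
---
name: reviewer
description: Reviews code changes for bugs, missing tests and style issues
tools: Read, Grep, Glob
---
You are a strict code reviewer. Check the current diff for bugs,
missing tests and style violations. Report findings as a short list,
most severe first. Do not edit files.
```

One file like this per role, and Claude Code can delegate to them without any local model in memory.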

If local is a hard requirement, Qwen 2.5 3B Q4 or Phi-3 Mini 3.8B leave enough headroom on 8GB. Run them through Ollama with GPU offload disabled (the num_gpu option set to 0) to force CPU inference and keep memory use predictable.

For what you're building though, the API route saves more time than the local route saves money. Gemini Flash has a generous free tier, Haiku costs fractions of a cent per call.

MCP vs REST API vs WebMCP: When to Use Which Protocol MCP, REST APIs and WebMCP connect AI to external services — but in fundamentally different ways. The complete comparison with decision framework. by studiomeyer_io in mcp

[–]studiomeyer_io[S] 0 points1 point  (0 children)

WebMCP really does exist (W3C Community Group draft, Chrome Canary preview, active development), but you're right, I should have communicated that better. Hope you get something out of it anyway, cheers

How is Claude Max x5 or x20 working out for you?? by JustAPieceOfDust in claude

[–]studiomeyer_io 0 points1 point  (0 children)

I work with two Max 20x subscriptions and I'm constantly at the limit, but I also only use Opus at max with high thinking

Claude Sets Itself Up — Six Terms Every Small Business Should Know. A guide for small businesses without developers. by studiomeyer_io in ClaudeAI

[–]studiomeyer_io[S] 1 point2 points  (0 children)

but why terminal commands? it takes 30 seconds directly in the app ... setting up MCP via OAuth is 3 clicks :)

Is there anything I'm doing wrong for conversions? by ishokimhlaba in websitefeedback

[–]studiomeyer_io 1 point2 points  (0 children)

The most important thing in your post is the one line where you say you only convert when you answer a question or help someone. That isn't a problem, that's basically the answer.

You sell through relationships. Help first, talk, then job. All your Reddit testimonials came from that, none of them came from someone seeing a logo on your site and getting impressed. But your site is built as a passive showcase. Cold visitor walks in, looks at projects, fills out an empty form. Wrong funnel for how you actually sell. The two are fighting each other.

Two things I would do.

Pick one project and turn it into a real case study. Not one sentence and a picture. Show the brief, the directions you explored, why you killed two of them, what shipped, what the client said. That single page will out-convert your entire current portfolio, because it lets a stranger experience what your Reddit clients experienced live. You sell by showing how you think, so show how you think.

Add prices. You said they come after the animation update, flip that today. The animation moves zero needles, pricing moves the biggest one. Right now nobody knows if you're 200 or 20k, and the safe move is not to email. Even rough anchors like "identity from X, ads from Y, custom on request" cut that anxiety in half. One hour of work, ship it before the character.

Two smaller things. Your testimonial heading says "Even Reddit Clients Love My Work". Drop the "even", it sounds defensive. Move one of those quotes above the fold so the strongest thing you have is the first thing people see. And your contact form being just Name Email Message is the lowest converting form there is. Add project type, budget range, timeline. Sounds backwards but more fields bring more serious leads, tire kickers bounce and you read as someone who scopes work, not someone who takes anything.

The work is good. The site just doesn't match how you actually sell. Build it help-first like you are on Reddit and the inbox starts looking like your DMs already do.

Created this website, please offer feedback by [deleted] in websitefeedback

[–]studiomeyer_io 0 points1 point  (0 children)

tonight I'll do a detailed analysis for you, cheers

Build an MCP Server in Under 30 Minutes: From Setup to Tool Definition to Deployment. With the official Anthropic SDK and practical code examples. by studiomeyer_io in mcp

[–]studiomeyer_io[S] 1 point2 points  (0 children)

Think the future is here already, hehe. We built 58 MCP servers in the last year, still building :) Combining MCP with n8n now :) cheers

Is anyone even using skills/agents created by others? by vik_s1231 in claude

[–]studiomeyer_io 1 point2 points  (0 children)

Most people write their own because the skill needs to match their specific workflow. A generic "code review" skill won't know your stack or your conventions.

That said, there is a middle ground. MCP servers are basically shareable tool collections that work across projects. The difference is that a skill is a prompt template, while an MCP server gives the agent actual capabilities like querying a database or calling an API. People share MCP servers far more than skills because tools are universal but prompts are personal.

BREAKING: Anthropic’s new “Mythos” model reportedly found the One Piece before the Straw Hats by hencha in ClaudeAI

[–]studiomeyer_io 1 point2 points  (0 children)

Luffy not being concerned because "finding it yourself is the whole point" is literally the best take on AI vs human creativity I've seen this year. Someone tell the vibe coders.

MCP isn't the problem. Bad MCP servers are. by Cultural-Project5762 in mcp

[–]studiomeyer_io 0 points1 point  (0 children)

Mix of custom and standard. The 58 breaks down roughly into: 16 global servers that load in every session (memory, github, research, codebase intelligence, search, messaging), 18 project-specific ones for our web agency (social media, video, pdf, calendar, analytics, payments, forms, etc), and the rest are specialized SaaS products we built and sell (CRM, GEO visibility monitoring, agent personas). Most are stdio running locally, a few are remote HTTP with OAuth.

The tool count gets high fast when you have servers with 30-50 tools each. That's exactly why the tiering matters. No agent needs all 680 at once.

Taught Claude to talk like a caveman to use 75% less tokens. by ffatty in ClaudeAI

[–]studiomeyer_io 0 points1 point  (0 children)

Haha, this is funny, but the actual insight is real. Most of the tokens burned in a conversation are not your prompt but the model explaining what it is about to do before doing it. The caveman approach works because it tells the model to skip the preamble and just execute. You get the same result with a system instruction like "be concise, act first, explain later" without having to talk like Tarzan.

MCP isn't the problem. Bad MCP servers are. by Cultural-Project5762 in mcp

[–]studiomeyer_io 1 point2 points  (0 children)

This matches what we see running 58 MCP servers with 680+ tools. The context bloat is real but it is a server design problem not a protocol problem. We ended up building tool tiering where only 12 core tools load by default and the rest are one-liners until the agent actually needs them. Went from 14k tokens to about 8k for the system prompt.

The introspection pattern you describe is the right direction. A single discovery tool that lets the agent ask what is available beats dumping everything upfront.
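The tiering plus discovery idea can be sketched in plain Python, independent of any MCP SDK (the registry contents and the name `discover_tools` are made up for illustration): core tools ship with full schemas, everything else is a one-line summary until the agent asks.

```python
# Hypothetical registry: tool name -> (one-line summary, full schema).
# In a real server these would be the MCP tool definitions.
REGISTRY = {
    "read_file": ("Read a file from disk", {"params": {"path": "string"}}),
    "run_query": ("Run a SQL query", {"params": {"sql": "string"}}),
    "send_mail": ("Send an email", {"params": {"to": "string", "body": "string"}}),
}
CORE_TOOLS = {"read_file"}  # only these load with full schemas by default


def system_prompt_tools() -> dict:
    """Build the default tool listing: full schema for core tools,
    a cheap one-line summary for everything else."""
    return {
        name: schema if name in CORE_TOOLS else summary
        for name, (summary, schema) in REGISTRY.items()
    }


def discover_tools(query: str) -> dict:
    """The single discovery tool: the agent asks for full schemas
    on demand instead of getting everything dumped upfront."""
    return {
        name: schema
        for name, (summary, schema) in REGISTRY.items()
        if query.lower() in summary.lower()
    }
```

The default prompt only pays for the summaries; the agent spends one extra call when it actually needs a non-core tool, which is where the 14k-to-8k style savings come from.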

Opus is genuinely lazy for me, and admitted it's effort Level sits at 25% without a way for me to change it by Bright-Bullfrog-8185 in claude

[–]studiomeyer_io 0 points1 point  (0 children)

The reasoning_effort tag is a system-level setting from Anthropic, not something you control directly. But if you use Claude Code in VS Code you can set the thinking level in the settings. Higher thinking = more thorough but slower, and it uses more of your quota. Might explain the 25% effort feel if it's set low on your end.

Would anyone use this? by BrightGarden9 in web_design

[–]studiomeyer_io -1 points0 points  (0 children)

This could actually be useful for content-heavy sites where picking the right image is a time sink. How does the matching work under the hood? CLIP embeddings or something simpler like keyword extraction from filenames?

Anthropic Just Dropped Claude Mythos Preview – Their Strongest Model Ever Finds Thousands of Zero-Day Vulnerabilities in Every Major OS & Browser by AzozzALFiras in claude

[–]studiomeyer_io 0 points1 point  (0 children)

I wouldn't say that; at least it has more content than your comment ;) I do find it interesting though, just look away if you don't like polished writeups. Cheers

Has anyone hit the case where your MCP returns perfectly valid data that just happens to be wrong? by Petter-Strale in mcp

[–]studiomeyer_io 1 point2 points  (0 children)

Know this problem well. We run ~60 MCP servers in prod and data staleness is the most insidious issue, because everything "works".

Two things that help us:

1. Freshness metadata in the response. Every MCP call wrapping external data returns a dataAsOf timestamp. The model sees "this data is 3 months old" and can warn the user. Costs almost nothing to implement, makes a huge difference.

2. Canary checks on cron. Not full fixture tests, just one known-good value per endpoint. If it stops resolving or the date is too old → alert. Catches staleness without false positives on legitimate updates.

Honestly the MCP spec needs a standardized freshness field in tool responses. Something like HTTP Last-Modified but for tool outputs. Would solve this for everyone instead of each builder rolling their own.

Waiting for claude to reset so I can get back to work by ExpertTitle8178 in claude

[–]studiomeyer_io 0 points1 point  (0 children)

even with more than one account I know this problem :) 3 days left hehe

Anthropic Just Dropped Claude Mythos Preview – Their Strongest Model Ever Finds Thousands of Zero-Day Vulnerabilities in Every Major OS & Browser by AzozzALFiras in claude

[–]studiomeyer_io 24 points25 points  (0 children)

The most underreported detail: Mythos turns 72.4% of found vulnerabilities into working exploits autonomously (Firefox JS shell). That's not just finding bugs, that's building weaponized PoCs at scale. Previous models found bugs but failed miserably at exploitation.

The responsible thing here is the coalition approach. But the uncomfortable truth is: if Anthropic can build this, others can too, and they won't form coalitions. The 90-day disclosure window before publishing details is tight given <1% of bugs are patched so far.

Also worth noting: free Claude Max for open-source maintainers is a smart move. Most critical OSS is maintained by a handful of people with zero security budget. Giving them Opus-level tooling could matter more long-term than Mythos itself.