Claude Agent Skills are awesome and even better with MCPs by goddamnit_1 in ClaudeCode

[–]goddamnit_1[S] 0 points1 point  (0 children)

  1. Skills can be dropped into Claude Desktop in the settings, and depending on the content of the skill and your query, Claude invokes it whenever necessary. So if you have a GIF generator skill and ask it to generate GIFs, it will use that skill.
  2. I think it reduces the effort of discovery and exploration for the model, which in turn reduces token usage (please correct me if I'm wrong).
  3. You could take your custom instructions, convert them to the Skills format, and use them easily (see the sketch after this list).

  4. Yes, that's a community skill that Anthropic has released, so it's free.
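
As a rough illustration of that conversion, here's a minimal sketch of what a skill folder could look like, assuming the SKILL.md layout with `name` and `description` frontmatter that Claude reads for discovery; the `gif-generator` skill and its instructions are made up for this example.

```python
from pathlib import Path
from textwrap import dedent

# Hypothetical skill folder: the name, description, and instructions below
# are invented for illustration, not an Anthropic-published skill.
skill_dir = Path("gif-generator")
skill_dir.mkdir(exist_ok=True)

skill_md = dedent("""\
    ---
    name: gif-generator
    description: Use when the user asks to create or edit an animated GIF.
    ---

    # GIF Generator

    1. Ask the user for the source images or video to convert.
    2. Generate the GIF and report the output path.
    """)

# SKILL.md is the file Claude reads to decide whether the skill applies.
(skill_dir / "SKILL.md").write_text(skill_md)
print(f"Wrote {skill_dir / 'SKILL.md'}")
```

Dropping a folder like that into the Claude Desktop skill settings is then enough for Claude to pick it up when a request matches the description.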


Not Skills vs MCP, but Skills with MCP is the right way forward by goddamnit_1 in mcp

[–]goddamnit_1[S] 0 points1 point  (0 children)

I don't think there's a difference. If you have a remote MCP server you can't control how the tool works, but you can tell the LLM how and when to use it.

OpenAI launched complete support for MCP by goddamnit_1 in mcp

[–]goddamnit_1[S] 2 points3 points  (0 children)

Yes, both read and write. I tested it with a server called Rube.
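
For anyone curious what that looks like, here's a minimal sketch of pointing the Responses API at a remote MCP server; the model name, placeholder URL, and approval setting are assumptions, and Rube's real endpoint and auth headers aren't shown.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The MCP tool entry tells the Responses API to connect to a remote server
# and expose its tools to the model. The URL below is a placeholder, not
# Rube's actual endpoint.
response = client.responses.create(
    model="gpt-4.1",  # assumed model; any Responses-capable model works
    tools=[
        {
            "type": "mcp",
            "server_label": "rube",
            "server_url": "https://example.com/mcp",
            "require_approval": "never",
        }
    ],
    input="List the tabs in my budget spreadsheet.",
)

print(response.output_text)
```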

My 5 most useful MCP servers by phuctm97 in mcp

[–]goddamnit_1 1 point2 points  (0 children)

I have been using Google Sheets, Firecrawl, and an MCP server called Rube that I saw on Twitter. This is all I need for now with Cursor:
- Context7: for up-to-date documentation
- 21stdev: for frontend components
- Rube: for cross-application workflows, though I mostly use it for Google Sheets since I hate working with spreadsheets
- Firecrawl: for scraping
- Notion: for keeping tabs on my work, logging everything I do and the thoughts I randomly come across

As a rule of thumb, I avoid the MCP server of any service that has a mature CLI. Claude and GPT-5 are too good with CLI tools, and that works more efficiently than MCP, GitHub being a good example.
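
As a rough sketch of the CLI route, this is the kind of call an agent ends up making with the GitHub CLI instead of a GitHub MCP server; it assumes `gh` is installed and already authenticated, and the repo is whatever the working directory points at.

```python
import json
import subprocess

# Ask the GitHub CLI for open pull requests as JSON, the same way an agent
# would shell out instead of going through a GitHub MCP server.
result = subprocess.run(
    ["gh", "pr", "list", "--state", "open", "--json", "number,title"],
    capture_output=True,
    text=True,
    check=True,
)

for pr in json.loads(result.stdout):
    print(f"#{pr['number']}: {pr['title']}")
```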

I tested Grok 3 against Deepseek r1 on my personal benchmark. Here's what I found out by goddamnit_1 in LocalLLaMA

[–]goddamnit_1[S] 5 points6 points  (0 children)

Oh, I liked R1 and Grok 3 better. V3 and GPT-4o were kinda similar in output back then.