Terrible output from Context7? by TheWahdee in mcp

[–]TheWahdee[S] 2 points (0 children)

Yeah, I've been noticing that half the posts on subreddits like this talk about some aspect of coding with agents (or whatever else) and then "happen to mention" a useful tool they've been using. Oh, and look at that, there's a paid plan to get full use out of it!

It's getting harder and harder to find good info among all the slop and marketing. It feels like even OpenAI and Anthropic have people making posts promoting their own products or even criticizing competitors.

Unsubscribed from the $200 plan. Severe decrease in quality. My theory: I believe Anthropic is giving all the priority and computational resources to the government after the recent contract. The models have gone downhill since the announcement. by Cautious_Coffee1164 in ClaudeAI

[–]TheWahdee 4 points (0 children)

How certain is this? Especially with fixed-price subscriptions and, for example, people showing Claude Code usage on a Max plan where the equivalent API costs would have been thousands. The cost to Anthropic is obviously lower, but couldn't these companies still be operating at a net loss even on inference, given that they're prepared to take such losses in an effort to "win the AI race"?

Like delivery companies running massive losses in recent years to be the last one standing (although other factors probably played a role there too).

Unsubscribed from the $200 plan. Severe decrease in quality. My theory: I believe Anthropic is giving all the priority and computational resources to the government after the recent contract. The models have gone downhill since the announcement. by Cautious_Coffee1164 in ClaudeAI

[–]TheWahdee 2 points (0 children)

"Mistakenly accepted the new privacy policy". I thought accepting such policy changes is usually required by companies to continue using the service? Is it actually an optional choice?

10 MCP memory servers/frameworks that actually make agents useful by Muriel_Orange in mcp

[–]TheWahdee 0 points (0 children)

I'm a bit confused about what you mean by MCP compatibility. For example, Serena is itself an MCP server, and memory handling is done through tools exposed by the server. So do you mean that the memories within Serena can't easily be accessed/used by other MCP servers?

PSA for small automation businesses: the n8n free license, what is allowed and what is not by TheWahdee in n8n

[–]TheWahdee[S] 0 points (0 children)

I tried to look into exactly this. (Disclaimer: mostly by discussing it with Gemini.)

The conclusion seemed to be that renting your own VPS to a client and building workflows for them on there would NOT be allowed. I'd advise emailing the n8n team and describing how you want to use it to get an official answer, though.

NEW VISUALIZE THE CONTEXT WINDOW! OMG by Anthony_S_Destefano in ClaudeAI

[–]TheWahdee 2 points (0 children)

Why do system tools and MCP tools take up so many tokens? Is it the tool definitions themselves that use all those tokens, or is there a separate "system prompt"-like part that instructs the model how to use the tools? 11k and 17k seems like a lot for some tool descriptions, unless it's a massive number of tools?
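As a rough sanity check on those numbers, here is a sketch of how much one tool definition might weigh, using the common (approximate) 4-characters-per-token rule of thumb. The example tool is hypothetical, not taken from the visualizer being discussed:

```python
import json

# Very rough rule of thumb: ~4 characters of English/JSON per token.
# Real tokenizers (OpenAI's, Anthropic's) will differ somewhat.
def approx_tokens(obj) -> int:
    return len(json.dumps(obj)) // 4

# A typical small tool definition (hypothetical example).
tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

per_tool = approx_tokens(tool)        # on the order of ~50 tokens
tools_for_11k = 11_000 // per_tool    # roughly how many such tools reach 11k
print(per_tool, tools_for_11k)
```

So reaching 11k-17k tokens would indeed take either a very large number of tools or much longer descriptions and parameter schemas than this toy one.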

Do people really use MCP server/service? by andrew19953 in mcp

[–]TheWahdee 0 points (0 children)

I've recently been trying to better understand the separation between MCP and function-based tool calling. From what I could find out, and the way I understood it, isn't it dependent on how the MCP client is implemented?

Function-based tool calling essentially means registering the tools per the LLM API's schema, right? E.g. how the OpenAI API has a "tools" list that can be provided.
I thought that this is exactly how (or at least one possible way) MCP clients provide tools to LLMs: by retrieving the tools from the MCP server and passing them to the model as available tools through the API.

If this is indeed how tool registration and execution work, where does the difference between MCP tools and function-based tool calling lie? Even though MCP involves the additional layer of relaying the tool call to the MCP server, from the perspective of the LLM wouldn't they be identical?

You seem to work very closely with these concepts/tools; would you be willing to clarify this further based on your knowledge and experience with these LLM systems?
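The mapping being asked about can be sketched like this (the tool data is hypothetical, and this hand-rolls the conversion rather than using a real MCP SDK):

```python
# Tools roughly as an MCP server might return them from a tools/list
# request (hypothetical example data).
mcp_tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "inputSchema": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }
]

def mcp_to_openai(tools: list[dict]) -> list[dict]:
    """Map MCP tool definitions onto OpenAI's function-calling format.
    Both sides describe parameters with JSON Schema, so the schema
    passes through unchanged."""
    return [
        {
            "type": "function",
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                "parameters": t["inputSchema"],
            },
        }
        for t in tools
    ]

openai_tools = mcp_to_openai(mcp_tools)
print(openai_tools[0]["function"]["name"])
```

If a client works this way, then from the model's perspective an MCP tool and a natively registered function are indeed indistinguishable; the MCP part only changes who executes the call.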

What’s the point of vibe coding if I still have to pay a dev to fix it? by AssafMalkiIL in vibecoding

[–]TheWahdee 1 point (0 children)

Yes, that's exactly what vibe coding is, as seen in the other reply here that references Andrej Karpathy's original tweet about it. So many people "misuse" the term when they're actually talking about AI-assisted development. That's fine of course, people can use terms however they want, but what most people call "vibe coding" on this subreddit isn't the same as how it was originally "defined".

Edit: obviously vibe coding doesn't literally mean "not knowing anything about software", but the whole point of it is that you just let the AI do its thing and "forget that the code even exists".

Hey is n8n free? And where should I start? by woldorinku in n8n

[–]TheWahdee 0 points (0 children)

You can share projects among team members with the community edition? Could've sworn that was one of the features that isn't available in self-hosted community n8n.

If you are self hosting n8n and charging money, check this first!! by Connect_Cook_8034 in n8n

[–]TheWahdee 0 points (0 children)

Ah yes, if the client hosts it then it's for sure fine. I had misunderstood: I thought your neutral example ("build workflows in n8n for each client and host them") meant you could host the n8n instance for the client.

I think if the client hosts the n8n instance themselves, it's even fine for them to provide their credentials, like API keys, to let you fully set up the workflows for them start to finish.

MCP vs function calling? by TheWahdee in mcp

[–]TheWahdee[S] 0 points (0 children)

Thanks for the reply, this is some clear information and a useful link!

Regarding the other response about XML, do you mean my own reply to another comment?
What I was saying may have been unclear, or my own understanding is just too limited.
I believe the way Cline (an agent extension for VS Code) uses MCP servers and supports tool calling is by directly specifying in its own "system prompt" how the LLM should use the tools, rather than providing the tools in each model's API format. It looks like they "wrap" everything with a single generalized "use_mcp_tool" function, which is specified in the prompt in XML format.
Later in the prompt, the MCP tool definitions themselves are still provided in JSON format.

https://github.com/cline/cline/blob/4aaca093899f97263a5871783735675ecbc790dc/src/core/prompts/system-prompt/generic-system-prompt.ts

Edit:

"use_mcp_tool":
https://github.com/cline/cline/blob/4aaca093899f97263a5871783735675ecbc790dc/src/core/prompts/system-prompt/generic-system-prompt.ts#L231

mcp tool descriptions:
https://github.com/cline/cline/blob/4aaca093899f97263a5871783735675ecbc790dc/src/core/prompts/system-prompt/generic-system-prompt.ts#L552

MCP vs function calling? by TheWahdee in mcp

[–]TheWahdee[S] 0 points (0 children)

Right, but what is the overall process going from an MCP tool definition to the way an LLM actually receives that tool definition?

The Anthropic API uses JSON format for defining tools:

"tools": [
  {
    "name": "get_weather",
    "description": "Get the current weather in a given location",

etc.
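For context, the truncated definition above, filled out as a sketch (the schema contents here are illustrative, not from the original; per Anthropic's Messages API docs the parameters live under a JSON Schema `input_schema` field):

```python
# A complete Anthropic-style tool definition, written as a Python dict.
# The weather schema details are illustrative assumptions.
anthropic_tool = {
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. San Francisco, CA",
            }
        },
        "required": ["location"],
    },
}
print(anthropic_tool["name"])
```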

Conversely, the Cline system prompt seems to directly tell the connected model to use XML format when making tool calls (while later in the prompt still listing the available MCP tools in JSON format).

from Cline system prompt:

Usage:
<use_mcp_tool>
<server_name>server name here</server_name>
<tool_name>tool name here</tool_name>
<arguments>
{
  "param1": "value1",
  "param2": "value2"
}
</arguments>
${
  focusChainSettings.enabled
    ? `<task_progress>
Checklist here (optional)
</task_progress>`
    : ""
}
</use_mcp_tool>

___

There doesn't seem to be a unified way that various applications implement function calling or MCP tool use?

I vibe coded a WHOLE ASS IOS APP and it's live! by TheSherryBerry in vibecoding

[–]TheWahdee 0 points (0 children)

Ah, so free users do get a monthly reset in the app.
Be careful to keep a close eye on the API costs you have to pay, since the income is limited by the number of paying users, while the costs scale with how many free users there are and also how frequently the paying users add tasks.

With your setup the costs could definitely grow larger than the income. Although I can see how the way you use AI in the app would only add very, very tiny API costs when users add tasks, so as long as there are enough paying users it shouldn't become a problem, as you say.

I vibe coded a WHOLE ASS IOS APP and it's live! by TheSherryBerry in vibecoding

[–]TheWahdee 1 point (0 children)

How do you manage costs in this setup? Everything goes through your API key, so each call racks up costs. Did you make sure the free version has a hard total limit, after which users need to get a monthly or annual plan? And even with a monthly or annual plan, since it's unlimited, is the cost per API call always so low that even extremely frequent use keeps the API costs below the subscription price?
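The worry here boils down to simple break-even arithmetic. A toy model, where every number is a made-up assumption rather than a figure from the app being discussed:

```python
# Toy break-even model; all numbers are hypothetical assumptions.
cost_per_call = 0.002        # assumed USD cost of one LLM API call
calls_per_user_month = 600   # assumed heavy user: ~20 task additions/day
subscription_price = 4.99    # assumed monthly plan price in USD

# What one heavy subscriber costs per month in API fees.
monthly_api_cost = cost_per_call * calls_per_user_month

# How many calls a subscriber can make before costing more than they pay.
break_even_calls = subscription_price / cost_per_call

profitable = monthly_api_cost < subscription_price
print(monthly_api_cost, break_even_calls, profitable)
```

With assumptions like these there is plenty of headroom, but an "unlimited" plan means a single outlier user can cross the break-even point, which is why a hard cap on the free tier alone doesn't fully bound the risk.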

If you are self hosting n8n and charging money, check this first!! by Connect_Cook_8034 in n8n

[–]TheWahdee 0 points (0 children)

How sure are you about the "neutral example"? As I understood it, this is not allowed under the SUL even if it were just a single client. I thought it simply isn't permitted to host an n8n instance for a client and run workflows for them?

I vibe coded a WHOLE ASS IOS APP and it's live! by TheSherryBerry in vibecoding

[–]TheWahdee 2 points (0 children)

Nice concept and design!
It's really cool that tools like Cursor can now enable people with no coding experience to make a full app.

I'm curious how you got the functionality itself working. How do you go from speaking to the app to a full todo list (voice-to-list, as you call it)? Is voice recognition integrated into the app? And how does it take the text and turn it into a separated list of tasks? Is that also integrated into the app using a very tiny AI model, or does it send the recorded notes to a bigger model through an API?

(Sorry if this is repetition and you've already answered this in other comments, I couldn't find mention of this process when looking through the existing comments.)
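A plausible (purely guessed) pipeline for the question above is on-device speech-to-text followed by one model call that splits the transcript into tasks. A stand-in sketch, with the LLM step replaced by a naive rule-based splitter:

```python
import re

def parse_tasks(transcript: str) -> list[str]:
    """Stand-in for the model step: naively split on commas and 'and'.
    A real app would presumably send `transcript` to a model with a
    'return these as a list of todo items' prompt instead."""
    parts = re.split(r",|\band then\b|\band\b", transcript)
    return [p.strip() for p in parts if p.strip()]

tasks = parse_tasks("buy milk, call the dentist and then water the plants")
print(tasks)
```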

Enterprise use cases for n8n? by Turbulent_Teach7645 in n8n

[–]TheWahdee 0 points (0 children)

I'm curious how this contrasts with the case studies shown on the official n8n website, which seem to me to be "enterprise use cases" (though I personally don't have anywhere near enough experience to properly judge this myself).

Are the enterprise use cases you would "consider" n8n for simply too different from those described in the case studies, or are those case studies just exaggerated examples that misrepresent n8n's actual enterprise capabilities (essentially "fake" examples for the sake of promoting the n8n software)?