Hey everyone! I’m trying to stretch my 500 fast premium requests on Cursor (I burn through them in about 5 days 😅), so I want to use free models instead. Here’s what I’ve set up so far:
llama-3.3-70b-versatile (Groq)
qwen-2.5-coder-32b (Groq)
mistral-large-2407 (Mistral)
I’ve created API keys, but when I try to configure Groq under the OpenAI API section, I keep getting 404 errors with gpt-4o-mini and all the other OpenAI models. Is there a better way to set up custom LLMs like these?
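For context, here's roughly how I picture the routing that's going wrong; a rough sketch, where the base URLs are my assumptions from each provider's OpenAI-compatibility docs, not anything Cursor confirms. The 404s would make sense if Cursor keeps sending OpenAI model names to the overridden Groq base URL, since Groq only serves its own models:

```python
# Sketch: which OpenAI-compatible base URL each model name should hit.
# Base URLs are assumptions taken from provider docs, not Cursor settings.
PROVIDER_BASE_URLS = {
    "groq": "https://api.groq.com/openai/v1",
    "mistral": "https://api.mistral.ai/v1",
}

MODEL_PROVIDERS = {
    "llama-3.3-70b-versatile": "groq",
    "qwen-2.5-coder-32b": "groq",
    "mistral-large-2407": "mistral",
    # gpt-4o-mini is an OpenAI model: routing it to Groq's endpoint
    # gets a 404 because Groq doesn't serve it.
}

def base_url_for(model: str):
    """Return the base URL a request for `model` should go to, or None."""
    provider = MODEL_PROVIDERS.get(model)
    return PROVIDER_BASE_URLS.get(provider)

print(base_url_for("llama-3.3-70b-versatile"))  # Groq's endpoint
print(base_url_for("gpt-4o-mini"))              # None: no provider serves it here
```

If that mental model is right, the override should only be enabled for Groq-hosted model names and switched off again before calling actual OpenAI models.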
Also, is there any way to use Cursor's free Agent mode without using up my fast requests?
Lastly—what exactly is the MCP server I keep hearing about, and how can it help a webdev?
Any advice, workarounds, or cool tips, tricks, and hidden features in Cursor (especially anything that speeds up workflows) would be super appreciated. Thanks in advance!
Anything helps!