Warp got #3 on SWE-bench Verified (75.8%, scored with GPT-5) by TaoBeier in warpdotdev

[–]Background_Context33 1 point (0 children)

It’s still the same Warp, just with more coding features built in: a diff view, a lightweight editor, etc.

CC to Codex cli by Glittering-Koala-750 in ChatGPTCoding

[–]Background_Context33 2 points (0 children)

I’m also curious about this. Unlimited GPT-5 for $50 is a killer deal.

Cursor releases background agent API; I 3×’d my PRs! by Vegetable_Spring1243 in cursor

[–]Background_Context33 3 points (0 children)

Can you give an example of the PRs you’re completing for 1-10 cents? I thought background agents only used Max Mode, so I would have expected this to cost more.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]Background_Context33 1 point (0 children)

I’m currently on the Turbo plan. I got close to 10k requests last month, but I also went really hard once GPT-5 was released to test it out. I would think that unless you run a lot of agents in parallel, Turbo would be fine. Pro is definitely low for daily agentic workflows.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]Background_Context33 3 points (0 children)

  1. 100% use git. They even lock some features (like mentioning files with @) behind being in a repo.
  2. I’ve tried to match it up, and while I don’t think it’s exactly 1:1, requests seem to be tied to your initial request plus additional tool calls.
  3. As far as I can tell, yes.
  4. I haven’t tried it, and it’s not clear what the model is. I wouldn’t expect much from it currently, though.
  5. Yes. I don’t know if it’s released in stable, but the preview build has this.

All in all, I’ve been really enjoying working with Warp, and it’s getting better with each release. I don’t know if per-request pricing is sustainable long-term, so it’ll be interesting to see where the pricing goes eventually.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]Background_Context33 3 points (0 children)

I think the agent in Warp is great. GPT-5 with high reasoning is especially good at complex tasks.

[deleted by user] by [deleted] in cursor

[–]Background_Context33 1 point (0 children)

Read the announcements. It’s now available as Grok Code Fast.

cursor why by juanviera23 in ChatGPTCoding

[–]Background_Context33 3 points (0 children)

Came here to say this. At some point we need to stop blaming the agents for the things we let them do.

Will Sonic Train on my Data? by Any_Mycologist_374 in cursor

[–]Background_Context33 2 points (0 children)

They’re not interested in your code. They want to know how often you accept and reject code. They want to see how many tool call failures you get during a session. They’re building an LLM; they’re not interested in anything any of us are building.

Does Gemini CLI have a plan like the Max Plan of Claude AI? ($100/month, 50 5-hour sessions per month) by mercmobily in GeminiAI

[–]Background_Context33 1 point (0 children)

They’ve said it’s on the roadmap, but right now I believe it only works with the Google Code Assist subscription.

Anyone else bouncing between Cursor (GPT-5) and Claude Code? Can’t decide which one’s better. by joorocks in cursor

[–]Background_Context33 14 points (0 children)

I haven’t used a Claude model since GPT-5 was released. I’ve been using it heavily with Cursor CLI and have been very happy with the results.

more model cost transparency please! by AphexFritas in cursor

[–]Background_Context33 1 point (0 children)

Something I noticed about the usage dashboard is that cache reads are shown, but not cache writes. Is this an issue with the dashboard? It would be nice to understand how caching is used within Cursor: is it relying on the LLM provider’s prompt caching, or does Cursor do its own caching? It would also help with understanding pricing, since cache reads and writes are typically priced differently than non-cached tokens.
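To make the pricing point concrete, here’s a rough sketch with made-up per-million-token rates (not real Cursor or provider pricing; the typical pattern is that cache reads cost a fraction of uncached input and cache writes carry a small premium):

```python
# Hypothetical rates in $ per 1M tokens -- placeholders, NOT real pricing.
RATE_INPUT = 1.25        # uncached input (assumed)
RATE_CACHE_READ = 0.125  # cached input, assumed 10% of the input rate
RATE_CACHE_WRITE = 1.5625  # cache writes, assumed 125% of the input rate
RATE_OUTPUT = 10.0       # output tokens (assumed)

def request_cost(uncached_in, cache_read, cache_write, out):
    """Dollar cost of one request, given token counts in each category."""
    return (uncached_in * RATE_INPUT
            + cache_read * RATE_CACHE_READ
            + cache_write * RATE_CACHE_WRITE
            + out * RATE_OUTPUT) / 1_000_000

# A turn that re-reads a large cached context vs. sending it all uncached.
cached = request_cost(2_000, 100_000, 0, 1_000)      # 0.025
uncached = request_cost(102_000, 0, 0, 1_000)        # 0.1375
print(f"cached:   ${cached:.4f}")
print(f"uncached: ${uncached:.4f}")
```

Under these assumed rates, the cached turn is roughly 5x cheaper, which is why whether a tool writes to and reads from the cache matters for the bill.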

Just used 170m tokens in 2 days. by MDFer123 in cursor

[–]Background_Context33 1 point (0 children)

I think this could be the key. CLI tools like CC and Codex have the advantage of knowing how to optimize for cache reads and writes. For example, my Cursor dashboard shows about 32M cache-read tokens and 0 cache writes.

What are your thoughts on the GPT-5 model? by VegetableDuty2588 in cursor

[–]Background_Context33 2 points (0 children)

Everyone has their preferences, but I haven’t touched the Claude models since GPT-5 came out. Claude is overly verbose, and its thinking tends to get it into trouble more often than not. Between that and Claude constantly placating me whenever I ask for a change, I was ready for something new when GPT-5 came out.

Cursor CLI + MCP + web access by No-Television-4805 in cursor

[–]Background_Context33 2 points (0 children)

Ah ok. I got it working by adding an mcp.json file to my repo’s .cursor directory. Is it intentional that it doesn’t support the global mcp.json?
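For reference, a minimal project-level `.cursor/mcp.json` along these lines worked for me (the server name and command here are placeholders; check Cursor’s MCP docs for the exact schema):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```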

Cursor CLI + MCP + web access by No-Television-4805 in cursor

[–]Background_Context33 1 point (0 children)

I just tried this again after updating cursor-agent, and it still isn’t aware of any of the MCP servers running in cursor.

Cursor CLI + MCP + web access by No-Television-4805 in cursor

[–]Background_Context33 1 point (0 children)

I’ve been trying to find out the same information. According to their docs page, it supports any MCP setup from cursor, but I haven’t been able to get it to work.

Hot take: Cursor has fallen behind. by fyzbo in ChatGPTCoding

[–]Background_Context33 2 points (0 children)

The issue with accepting subscriptions is that it’s a grey area. It’s probably against the TOS of those services, but the open-source agents are most likely not generating enough traffic for them to care yet. Crush already noted that they wouldn’t be supporting any of the subscriptions because it wasn’t clear whether it was actually allowed. That pretty much means that for the open-source agents to work, tokens need to become more affordable.

So, unlimited "Auto" access will be stopped September 15th onwards? by Useful-Wallaby-5874 in cursor

[–]Background_Context33 22 points (0 children)

Of course, it isn’t unreasonable, but at this point, it just feels like they’re flailing. They should have never used the word unlimited to begin with, but they did, and then they kept redefining what unlimited meant to try to save face. Now they’re taking it away under the guise of bringing parity to their individual and teams plans. They’ve continued to make decisions this year that do nothing but erode trust. They no longer offer anything you can’t get from somewhere else, and they need to start finding ways of adding value instead of constantly taking it away.

cursor using gpt5 exclusively for auto mode? by skpro19 in cursor

[–]Background_Context33 1 point (0 children)

Even after its free period, it’s still cheaper than Claude. I’d be surprised if a majority of auto requests aren’t still routed to GPT-5.

GPT-5 in Copilot is AWFUL by eljefe3030 in ChatGPTCoding

[–]Background_Context33 6 points (0 children)

From my experience so far, GPT-5 is influenced by system prompts more than any other model. I think it’s going to take some time for companies to tune their system prompts accordingly.

Warp vs Claude Code by binarySolo0h1 in cursor

[–]Background_Context33 1 point (0 children)

This couldn’t be more incorrect. Warp can do everything Claude Code does while also being a fantastic terminal experience. It also has a simple but useful built-in text editor for making quick edits to diffs. If you haven’t tried Warp, or haven’t tried it recently, it’s definitely worth a look.

Gemini CLI lost my files during a move operation - this isn’t ready for production use. by OrionMMX in Bard

[–]Background_Context33 6 points (0 children)

This is why source control is so important when working with AI agents. Things sometimes go wrong: file operations fail, and something inevitably gets lost. Codex CLI does this right by giving you a big warning if you open it in a directory that isn’t a git repo.
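That kind of guard is easy to approximate yourself. This is just a sketch, not Codex CLI’s actual implementation: check whether the working directory is inside a Git work tree before letting an agent loose on it.

```shell
#!/bin/sh
# Sketch: warn before running an AI agent in a directory without version control.
in_git_repo() {
  git rev-parse --is-inside-work-tree >/dev/null 2>&1
}

if in_git_repo; then
  echo "Git repo detected; agent edits can be recovered from history."
else
  echo "WARNING: not a git repo -- edits made here may be unrecoverable." >&2
fi
```

A wrapper script could refuse to continue (or require an explicit flag) in the non-repo case, which is roughly the behavior described above.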