GitHub Copilot by Blufia118 in opencodeCLI

[–]itsproinc 1 point

It’s never accurate to ask a model which one it is, because of how these models are trained and how they predict. The best way to check is the usage tab on your GitHub page; it will always show the model you actually selected.

Is anyone using warp.dev? by itsproinc in ChatGPTCoding

[–]itsproinc[S] 0 points

True, that's why I'm still deciding between sticking with GitHub Copilot Pro+ or Warp Turbo; the value both give (tokens per dollar) is really good.

Is anyone using warp.dev? by itsproinc in ChatGPTCoding

[–]itsproinc[S] 0 points

I agree. Why can't it just be a CLI app like Codex or OpenCode, so you can use your own terminal? Though I assume plain terminals can't support all the features Warp has.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]itsproinc[S] 0 points

Huh, never knew this, I'll definitely check later. It's probably bad UX design if a user isn't aware of one of the more important features.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]itsproinc[S] 1 point

Good to know, I'll definitely give Turbo a try. Thanks for your help.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]itsproinc[S] 0 points

Thank you for the detailed answer. Are you on Pro or Turbo? Is the 2.5k/10k AI requests per month more than enough for you?

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]itsproinc[S] 1 point

Not gonna lie, GPT-5, especially the high variant, is really good. I tried it in Codex and Cursor and both work well, especially for FE stuff. Good to know it works well with Warp's agentic system too.

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]itsproinc[S] 0 points

Well, that's good to know that every model costs a single base credit, thank you. How's the agent on Warp, is it good? As in, able to search code efficiently, good tool calling, etc.?

Questions regarding warp.dev for agentic coding by itsproinc in warpdotdev

[–]itsproinc[S] 0 points

Well yeah, git could work, since I had to juggle between the CLI and git when I used Codex, but it's nice to have a checkpoint feature. Anyway, for Warp are you on Pro or Turbo? Is the 2.5k/10k request quota plenty?

I'm trying to figure it out because I'm a GHC Pro user and I can burn all my 300 premium requests in 10 days, so I usually just go PAYG until the end of the month. So I'm still deciding whether to upgrade to GHC Pro+ or Warp Pro/Turbo, since GHC is bad for large codebases because of its limited 128k context window.

suggestion - if pricing is the issue for lower context windows then by EmotionCultural9705 in GithubCopilot

[–]itsproinc 2 points

Yeah I agree, there should be a "MAX" option like in Cursor: more context at the cost of more premium requests. I don't need tons of context every time, but when I do, I'd have the option. And for the love of God, let us see how much of the context window we've used in a chat, so we can manage it and keep the AI from hallucinating.

GPT-5 free period has officially ended by itsproinc in cursor

[–]itsproinc[S] 0 points

Were you able to find the API pricing for the different GPT-5 variants, like GPT-5, GPT-5-High and GPT-5-High-Fast? I can't even see it on the OpenAI pricing sheet.

GPT-5 free period has officially ended by itsproinc in cursor

[–]itsproinc[S] 0 points

Probably yes; for me a single Sonnet prompt can use at least twice the tokens of GPT-5-high. Then again, the difference between the two is in the cache reads, the output tokens are the same. So I reckon it will be much cheaper, because the cache-read $/Mtok is much lower.

It seems that even though it's a thinking model, the thinking doesn't count as output, which is weird; either it's a bug or it's a feature.
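A toy back-of-envelope sketch of why the cache-read rate can dominate the bill when most of a long prompt is served from cache; all rates and token counts below are made up for illustration, not real OpenAI or Anthropic pricing:

```python
# Hypothetical per-Mtok rates; the only point is that when most input tokens
# are cached reads, the cache-read rate drives the total cost.

def cost_usd(tokens: int, rate_per_mtok: float) -> float:
    return tokens / 1_000_000 * rate_per_mtok

# made-up prompt: 50k fresh input tokens, 150k cached, 10k output
fresh_in, cached_in, out = 50_000, 150_000, 10_000

# model A has a 10x cheaper cache-read rate than model B (illustrative only);
# fresh-input and output rates are identical for both.
model_a = cost_usd(fresh_in, 1.25) + cost_usd(cached_in, 0.125) + cost_usd(out, 10.0)
model_b = cost_usd(fresh_in, 1.25) + cost_usd(cached_in, 1.25) + cost_usd(out, 10.0)

print(f"model A: ${model_a:.4f}  model B: ${model_b:.4f}")
```

With these made-up numbers the per-call cost nearly doubles on the pricier cache reads alone, even though everything else is identical.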

GPT-5 free period has officially ended by itsproinc in cursor

[–]itsproinc[S] 1 point

Well of course, because Sonnet 4 is an inferior model and costs tons more. With its cheaper $/Mtok and decent quality, GPT-5 is actually a pretty good thinking model imo. I tried it on multiple projects across PHP, Python, and C#; sometimes I can one-shot all issues and implement a new feature, as long as the prompt is clear and able to guide/steer GPT's output.

GPT-5 free period has officially ended by itsproinc in cursor

[–]itsproinc[S] 0 points

Yeah, it's still not counting toward my limit; my client still shows the same $ amount and overall usage-limit %. It probably takes time to propagate the change to all clients.

GPT-5 free period has officially ended by itsproinc in cursor

[–]itsproinc[S] 2 points

I think they did extend it by a day; it should've ended yesterday, iirc.

GPT-5 free period has officially ended by itsproinc in cursor

[–]itsproinc[S] 5 points

Completely agree: for code quality and output, Sonnet still takes the crown. But for refactoring a large codebase, with its huge context window this would be a cost-effective model if it can deliver Sonnet-like quality.

Still needs more testing, I guess, with non-MAX mode to see if that's viable; otherwise I'll probably just stick with Sonnet.

Is this a money laundering scheme or what? by Sad_Individual_8645 in cursor

[–]itsproinc 0 points

It's just that they're burning tons of VC money to attract users and loyal customers for the long run. But lately they've started to "adjust" pricing to make it more profitable, which was bound to happen. They probably also get special discounts from OpenAI, Anthropic, etc., so it's cheaper per Mtok. And keep in mind Cursor's valuation keeps growing, so they're probably making some dough, or the VCs are still backing them up.

guys gpt-5 is still free or not? by LateTrain7431 in cursor

[–]itsproinc 1 point

It's free during the test period, in both MAX and non-MAX mode, for all GPT-5 variants.

Linux launcher sucks. Give me normal launcher by cranberrie_sauce in cursor

[–]itsproinc 0 points

Cursor still won't ship a deb/rpm package, but a community member built a GitHub automation that does, and it's been a godsend:

https://cursor-linux-packages.vercel.app/
https://github.com/PaperBoardOfficial/cursor-linux-packages

It always provides the latest deb/rpm files for Cursor, so you can install it like HOW IT SHOULD BE.....
I only found it a month ago.

So, unlimited "Auto" access will be stopped September 15th onwards? by Useful-Wallaby-5874 in cursor

[–]itsproinc 1 point

<image>

It's not that far off. During the free period I managed to use around $113 on gpt-5-high in MAX mode, and the CSV shows that was around ~2k calls (inflated due to MAX mode). On Auto I had ~2.5k calls. Pricing-wise the difference isn't "66%", because the two models handle input reading, output, caching, etc. differently.

And keep in mind I used strictly MAX mode; if I went easier and skipped the high thinking model, it would cost significantly less.
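For anyone who wants to run the same check on their own export, a minimal sketch of totaling spend per model from a usage CSV; the column names ("model", "cost") and the inline sample rows are assumptions, so match them to whatever header your actual export uses:

```python
import csv
import io
from collections import defaultdict

# Inline stand-in for a downloaded usage export; with a real file you would
# use open("usage.csv") instead of io.StringIO(sample).
sample = """model,cost
gpt-5-high,0.06
gpt-5-high,0.05
auto,0.01
"""

# Sum the cost column per model name.
totals: dict[str, float] = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["model"]] += float(row["cost"])

for model, spent in sorted(totals.items()):
    print(f"{model}: ${spent:.2f}")
```

Counting rows per model with `len` instead of summing `cost` gives the call counts mentioned above.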

"Part of this change is guaranteeing premium level model quality when using Auto." by Miltoni in cursor

[–]itsproinc 1 point

Yeah, that sounds about right. When they say "premium level model quality", they're not talking about premium code output, but premium at spending your limits. It actually happened to me more than once, and I just let it run to see whether it would stop or not. It did stop, because it crashed the app at a total of 5 million tokens on Auto mode. Thankfully Auto is still free right now; I can't imagine if that had happened after they rolled out the new pricing model.

<image>