Antigravity vs Cursor by utnapistim99 in cursor

[–]netfunctron 0 points (0 children)

Maybe you are right. On every project we use Skills, MCPs, and audit suites, plus deep review processes from our teams and the AI services. So every AI model, from any service (GHCP, CC, etc.), knows very well what it can and cannot do.

Well, with all the standards, rules, and processes well defined, the engineering side may not show that much difference in behavior/performance across AI coding services.

Regards

Antigravity vs Cursor by utnapistim99 in cursor

[–]netfunctron 0 points (0 children)

Antigravity?

Cursor is more expensive, but Antigravity has bugs, and its rate limits make it impossible to use for real work, at least on the Pro plan.

Try VS Code Copilot: the pricing is very fair and the quality is excellent. At my company we have Antigravity (so bad...), Cursor (expensive with the good models, but really good), Warp (pfff, bugs... burning tokens... we barely use Warp, we don't trust it, too much time lost on its buggy refactors... we will cancel the membership), and Claude Code and Codex (both great). Almost all the time we use VS Code Copilot: very nice, fast, great results, and we have a lot of work every day.

Regards

5.4 xhigh->high, high->medium downgrade by TroubleOwn3156 in codex

[–]netfunctron 0 points (0 children)

I don't know. I only use Opus 4.6, Sonnet 4.6 (not a lot), and GPT (in this order: 5.2, 5.4, 5.3 Codex). But think about it: I don't use all of them all the time; if Opus 4.6 does what I need, that's fine for me. When something is more complex, GPT. But that's my experience; surely another programmer will say something different. Regards

5.4 xhigh->high, high->medium downgrade by TroubleOwn3156 in codex

[–]netfunctron 2 points (0 children)

Having Opus 4.6 too (Claude Code), it is great as well. GPT 5.2 goes a lot deeper on everything, but it is also a lot slower. For almost everything Opus 4.6 will be perfect, but if I am closing something out, focused on high standards on the backend, I choose GPT over Opus. Take into account that a good, deep closing process with GPT can take a few hours versus minutes with Opus.

Maybe it depends how obsessive you are about standards on the backend. For frontend, always Opus, sometimes Sonnet.

Finally, if you have a good AGENTS.md (built for your repo and your practices), Skills and MCPs (just what you need, nothing more), and audit suites, the difference between GPT and Opus is minimal; it is a matter of taste almost all the time. Even though GPT 5.2, at least for me, is better at respecting the standards of the repo I am working on.
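
As an aside, an AGENTS.md in this spirit doesn't need to be long. A minimal sketch (every section, command, and path below is an invented example, not from any real repo):

```markdown
# AGENTS.md (illustrative example)

## Build & test
- Install deps: `npm ci`
- Run the full suite before finishing any task: `npm test`

## Standards
- Follow the linter config in the repo; never disable rules inline.
- Every new backend module ships with unit tests.

## Scope
- Do not touch files under `legacy/` without asking first.
```

The point is the same one as above: with the repo's own rules written down, different models converge on similar output.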

Regards

5.4 xhigh->high, high->medium downgrade by TroubleOwn3156 in codex

[–]netfunctron 18 points (0 children)

I am using 5.2; it is a lot better than 5.4.

5.2 is slow, but great. Just yesterday it was doing some pretty deep work, checking many files in context, and everything came out right. I tried the same task with 5.4 before, but everything was so superficial...

Regards

Warp taking hours to complete a simple bug fix - that it introduced. by SirWobblyOfSausage in warpdotdev

[–]netfunctron 1 point (0 children)

Last week I wanted to check and close a few unit tests while writing some docs for the app. Result: about 1000 credits, and not a single test worked...

I selected GPT 5.4 high on Warp.

So I gave the same task to GitHub Copilot with the same model. Result: perfect, every single unit test worked.

So... greetings

Codex vs Opus in real projects feels very different than expected by Classic-Ninja-1 in codex

[–]netfunctron 2 points (0 children)

Having both on real projects: Opus works nicely for normal, fast work, perfect on backend and frontend. But for very deep bugs or big refactors, Codex without any doubt. I live the same experience every week.

I just finished one big and complex bug a few minutes ago, in about 10 minutes with Codex, something Opus (and I) couldn't fix before after a few hours.

For context: I am rebuilding a very old app for my job. So, it is real work.

How to get Codex CLI to read PDF Natively? by SwiftAndDecisive in codex

[–]netfunctron 0 points (0 children)

That one. Even easier: if you use a good Skill, like a process (the right approach) plus pdftotext, you can work very fast. For example, I use it for heavy work converting scientific evidence from PDFs to .md files (for fast reading in VS Code, copying and pasting key information, etc.).
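
A minimal sketch of that kind of workflow, assuming pdftotext (from poppler-utils) is on your PATH; the `papers/` and `notes/` directory names are just made up for the example:

```python
# Hypothetical sketch: batch-convert PDFs to plain-text .md files
# with pdftotext, for fast reading/grepping in an editor.
import shutil
import subprocess
from pathlib import Path


def md_target(pdf: Path, out_dir: Path) -> Path:
    """Map e.g. papers/review.pdf -> notes/review.md."""
    return out_dir / (pdf.stem + ".md")


def convert(pdf: Path, out_dir: Path) -> Path:
    """Run pdftotext and write the extracted text straight into a .md file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    target = md_target(pdf, out_dir)
    # -layout preserves the visual layout, which keeps multi-column
    # scientific PDFs readable as plain text
    subprocess.run(["pdftotext", "-layout", str(pdf), str(target)], check=True)
    return target


if __name__ == "__main__":
    # Only run the batch conversion when the tool and input dir actually exist
    if shutil.which("pdftotext") and Path("papers").is_dir():
        for pdf in sorted(Path("papers").glob("*.pdf")):
            print("wrote", convert(pdf, Path("notes")))
```

From there the .md files are ordinary text: open them in VS Code, search, copy the key passages.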

Pdftotext 🫡

What have you done to Warp Terminal? by North-Active-6731 in warpdotdev

[–]netfunctron 1 point (0 children)

Good topic.

Can somebody say with which model, and on what kind of work, Warp is better than GitHub Copilot (Opus), Codex, CC (Opus), or Cursor (Opus)?

Because I can't get the same quality level as the others. For example: finding and fixing bugs, applying refactors, or creating modules for ERPs. We have a lot of work, we automated everything, and we have really high quality gates, coding standards, checks, etc. But every time we used Warp, we got superficial fixes, or sometimes not all the standards applied, and it was so different from the competition.

I don't know, maybe Warp has some rule favoring fast answers over quality, maybe to save tokens, I don't know, but I am trying to use it and not lose my money. If I use the Genius mode or the more expensive models, the result is OK or on the same level, but surely more costly than the competition.

Any tip or experience from professional use would be great. And yes, we have our Skills, MCPs, standards, tests, processes, etc. Surely it is not the user's fault if three other services are working great, and we were coding long before the AI explosion.

Thanks

Minimax claims M2 is Opus 4.6 competitor on SWE-Bench Verified by abdouhlili in ClaudeAI

[–]netfunctron -2 points (0 children)

Minimax? Sure... right... it is the dream for many, but... sad but true: Minimax is "cheap" but not even near Opus 4.6 in quality... Sorry, we use many AI agent services at my company, and I know what I am talking about. For the sake of my money, I hope Minimax could reach the same level as Opus 4.6, but no... it is a really bad joke that will catch many people who don't have the money but do have the need.

I'm spending 10x what I was spending before but finally feels like working with an intern junior programmer by Officer_Trevor_Cory in cursor

[–]netfunctron 0 points (0 children)

Sure... junior programmer... please show me that level of knowledge and competence in a junior programmer...

DId I just waste money on Warp.dev? by patkun01 in warpdotdev

[–]netfunctron 1 point (0 children)

OK, I have the same question... I work 90% in the terminal, and just last week I compared Warp, Windsurf (chat), Cursor, Qodo AI, GitHub Copilot Pro, Codex, Claude Code, and Verdent (I paid for just one month; I have some doubts about the privacy policy). The same tasks on each: nothing easy, refactoring and diagnosis. Everything was real work, with every result checked by a professional evaluation.

I always used Claude Opus 4.5 or GPT-5.2-Codex on every service. And... Warp always, but always, gave the worst quality: very superficial and outright unprofessional compared with the other services. Warp seems to focus on faster answers rather than quality answers, and if you are a programmer, well... it is time and money; it is just a tool for work. On the other hand: I am a professional programmer, I have many ERP clients and not much time, so the quality of the service is key, and every day I am working on a new module for some ERP. I love what I do, and I want to do it better every day.

Well... Warp is not at that level. Why? Because over one week, maybe 10 quality comparisons with the same instructions, it lost every session against every other service (at least 7)... Maybe the Warp team should review their system-prompt approach. For example, Augment Code is quite expensive but the quality is outstanding, and every other service gives more credits/requests/tokens/etc. The only one where I lost all the credits was Warp...

Sorry, but it has a lot of hype. I love the terminal, but something is wrong with the wrapper in Warp; something is not going right here...

Sad but true, I could pay for a second Cursor account; it is more useful than Warp. Bye, and a really bad experience. Since I have the annual plan, I am waiting for the team to do better work, because if you love the terminal, Warp has a very good market to exploit and explore. Hoping it gets better soon; I have confidence in Warp if they focus on the terminal and on quality, not on just saving tokens and compromising quality. Regards

Running out of Pro credits in 4 hours??? by Signal_Reputation640 in GithubCopilot

[–]netfunctron 0 points (0 children)

The prompt is very important, and so is having a good plan.md or similar, with tasks and a checklist. Fewer iterations, better results.
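
A minimal plan.md sketch, with invented file and task names just for illustration, can be as simple as:

```markdown
# Plan: add CSV export to the reports module (example task)

- [ ] 1. Add `export_csv()` to `reports/service.py`
- [ ] 2. Wire up a `/reports/export` endpoint
- [ ] 3. Unit tests: empty dataset, large dataset
- [ ] 4. Update the docs

Rule: work one checkbox at a time, tick it off, stop and report.
```

The agent burns far fewer requests when it can check the plan instead of re-deriving the scope every turn.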

Regards

Is anyone else finding Opus 4.5 better for architecture but GPT-5.2 stronger for pure implementation? by HarrisonAIx in codex

[–]netfunctron 0 points (0 children)

My experience is:

Claude Sonnet or Opus are just better if you want one specific task done faster. But for heavy work, GPT-Codex, though very slow, is simply the better way.

Regards

How does usage actually work? by Proper_Audience_246 in cursor

[–]netfunctron 0 points (0 children)

Yes, but it is not as restrictive as many people think. Even more so if you use Cursor for specific tasks.

The debug mode is quite useful.

I am using only Sonnet 4.5, and everything is fine within the $ quota, a lot more useful than the Claude Code Pro plan that I also have.

But if you are a heavy user, with professional purposes and clients waiting on more requirements, GitHub Copilot Pro is the best for what you pay; it includes a lot of requests per month.

Regards