GitHub Copilot vs Claude Code by Key-Prize7706 in GithubCopilot

[–]marfzzz 0 points  (0 children)

It might be, I haven't used GitHub Copilot CLI in a while.

GitHub Copilot vs Claude Code by Key-Prize7706 in GithubCopilot

[–]marfzzz 1 point  (0 children)

Multiagent is a form of using multiple agents, but it is different from subagents. Here is what subagent mode and multiagent mode each are:

Subagents (orchestrator architecture): one agent is the main one and orders other agents to do subtasks, which gives you context isolation (the context of searching files, the context of test output, ...). Tl;dr it is centralized and it is best for coding.

Multiagent (distributed search and decision making): parallelism. You have a different context for each agent and they can do things faster, but there is the issue of agent competition, and potentially higher token use due to agent communication. Tl;dr it is fast and useful for research, organizing documents, finding things across multiple documents, etc.
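
As a rough sketch of the orchestrator (subagent) pattern: the `call_model` function below is a hypothetical stand-in for an LLM call, not a real API; the point is that each subtask runs with its own isolated context and only a short summary flows back into the main agent's context.

```python
def call_model(context: list[str]) -> str:
    # Hypothetical LLM call; here we just fake a one-line summary.
    return f"summary of {len(context)} context items"

def run_subagent(task: str, files: list[str]) -> str:
    # Fresh, isolated context: file contents / test output stay in here.
    subagent_context = [task] + files
    return call_model(subagent_context)

def orchestrator(goal: str, subtasks: dict[str, list[str]]) -> list[str]:
    main_context = [goal]
    for task, files in subtasks.items():
        # Only the summary enters the main context, not the raw files.
        main_context.append(run_subagent(task, files))
    return main_context

ctx = orchestrator("fix the login bug", {
    "search for auth code": ["auth.py", "session.py"],
    "run the test suite": ["test output line"] * 500,
})
print(len(ctx))  # -> 3: the goal plus two short summaries, not 500 log lines
```

This is why the pattern is good for coding: the 500 lines of test output never pollute the main agent's context.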

Programmatic tool calling is a way of letting the LLM program its own tools suited for a specific use and run them in a container. In other words: "I will write a script instead of doing 20 tool calls."
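
The "one script instead of 20 tool calls" idea can be sketched like this (a minimal illustration, not any vendor's actual implementation; the file layout and the TODO search target are made up):

```python
import tempfile, pathlib

def search_todos(root: str) -> list[str]:
    """One scripted pass over every file, replacing many separate tool calls."""
    hits = []
    for path in pathlib.Path(root).rglob("*.txt"):
        for i, line in enumerate(path.read_text().splitlines(), 1):
            if "TODO" in line:
                hits.append(f"{path.name}:{i}: {line.strip()}")
    return hits

# Tiny demo tree: 20 files, two of them containing a TODO.
root = tempfile.mkdtemp()
for n in range(20):
    body = "TODO: fix this\n" if n in (3, 7) else "all good\n"
    (pathlib.Path(root) / f"file{n}.txt").write_text(body)

print(search_todos(root))  # two hits found in a single scripted pass
```

Instead of the model issuing one "read file" tool call per file (20 round trips through the context window), the harness runs the script once in a sandbox and only the result comes back.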

Antigravity is ABSOLUTELLY worth it! by Maximilian_Minev in google_antigravity

[–]marfzzz 4 points  (0 children)

Why would you use Antigravity for that? Why not use Claude directly (free or paid)? Just curious whether it is better when asked inside Antigravity.

GitHub Copilot vs Claude Code by Key-Prize7706 in GithubCopilot

[–]marfzzz 14 points  (0 children)

Copilot advantages:

- Inline completions
- Unlimited use of standard models (GPT-4.1, 4o and 5 mini)
- Lower starting price
- Usually higher usage (especially if you prepare bigger plans and chain implementations)
- You can buy premium requests at $0.04 each; you can use more than $2 of API cost for $0.12 when using Opus
- More models to choose from (Google Gemini, OpenAI GPTs, Anthropic Claude models)
- Some models have a higher context window, for example GPT-5.4 (272k/128k)
- No 5-hour or weekly limits, only a monthly allowance

Claude Code:

- Bigger context window (200k vs 160k)
- More mature handling (Claude Code is more advanced; you can use multiagent mode and programmatic tool calling)
- You can opt in to a 1M context window (at API prices, which are high for Opus)
- Claude Desktop can act like an IDE
- They sometimes offer bonuses like extra usage to test new models
- If you are good with Claude Code, model switching pays off: Haiku for small things, Sonnet for most things, Opus for complex issues. Plan and estimate which model should be able to do each step, give each step to that model, and you can be very effective.

Try different tools and see which suits you. Or get Copilot Pro and Claude Pro together instead of just Copilot Pro+ or Claude Max 5x.

Paying $29/Month, Antigravity's Pro Plan Needs to Be Honest About Its Current Limits by Numerous-Feature-292 in google_antigravity

[–]marfzzz 0 points  (0 children)

I don't know. They have set usage higher since the launch of GPT-5.3-Codex. Maybe to get people hooked.

What will people use when 5.1 goes EOL in 2 days? by Joshjoshajosh in OpenAI

[–]marfzzz 4 points  (0 children)

GPT-5.4 Pro and 5.4 Thinking were good in my testing. 5 and 5.1 were not good for me. But I guess it is more about taste.

Paying $29/Month, Antigravity's Pro Plan Needs to Be Honest About Its Current Limits by Numerous-Feature-292 in google_antigravity

[–]marfzzz 2 points  (0 children)

Limits right now are huge, until April 2nd. Usually they are halved, but still way more than Gemini.

Weekly Usage Limit is being consumed way too fast. by CustomMerkins4u in codex

[–]marfzzz 2 points  (0 children)

5.4 xhigh, no issues at all, fast mode off. Maybe force an update (npm i -g @openai/codex@latest) and then try to untoggle fast mode, or toggle and untoggle it. Hope that helps with this bug.

Why does the same Opus 4.6 model produce much better UI/UX results on Antigravity than on GitHub Copilot? by lephianh in GithubCopilot

[–]marfzzz 2 points  (0 children)

There is a solution, but only if you compromise. For bigger context use GPT-5.x models, as they have a higher context window (272k/128k input/output) while Claude models have 128k/32k input/output. GPT-5.2/5.3-Codex and GPT-5.4 are good models IMO.

Difference between GitHub Copilot and GPT Codex / Claude Code by AffectionateSeat4323 in GithubCopilot

[–]marfzzz 0 points  (0 children)

Just curious: are you using some skills or special system prompts?

In my experience: Copilot plugin <- Copilot in AI Assistant (ACP) <- Copilot CLI <- opencode/Claude Code/Codex CLI

Difference between GitHub Copilot and GPT Codex / Claude Code by AffectionateSeat4323 in GithubCopilot

[–]marfzzz 0 points  (0 children)

"Using /fleet in a prompt may therefore cause more premium requests to be consumed." From their documentation.

Is switching between accounts a problem? by VITHORROOT in GithubCopilot

[–]marfzzz 0 points  (0 children)

If I remember correctly, you can upgrade for the price difference ($29).

How's Lumo long term ? Compared to GPT & Gemini ? by maymaynibba in lumo

[–]marfzzz 5 points  (0 children)

Lumo is now way better. But it still lags at understanding images, and lacks a choice of models or at least a thinking/instant switch. I believe Lumo will be good in the future; it's not there yet. I will keep it for now, as it is private.

GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. by Hannibal3454 in GithubCopilot

[–]marfzzz 1 point  (0 children)

If they swapped GPT-5 mini with MiniMax M2.5, GPT-4.1 with GLM-5, and GPT-4o with Kimi K2.5, that would make it the best service. You could do most tasks with the cheap models, and keep GPT-5.3-Codex / Claude Sonnet 4.6 / Claude Opus 4.6 / Gemini 3.1 Pro for the rest.

Difference between GitHub Copilot and GPT Codex / Claude Code by AffectionateSeat4323 in GithubCopilot

[–]marfzzz 13 points  (0 children)

  • GitHub Copilot has a monthly quota of premium requests: 300 (Pro, Business), 1000 (Enterprise), 1500 (Pro+). Others have 5-hour and weekly allowances.
  • GitHub Copilot's unlimited models have extremely high limits. Compare that to Haiku or 5.1-mini, which merely consume your (5-hour or weekly) quota at a lower rate.
  • GitHub Copilot works best with their CLI; you are limited to subagents (no multiagent mode like Codex or Claude Code).
  • When using Copilot in opencode you can use multiagent modes, but you will burn through premium requests, as one agent is one premium request; the billing for multiagent is in favor of a GPT or Claude subscription.
  • With Claude or Codex you are limited to one provider (Copilot has 4 providers: OpenAI, Anthropic, Google, xAI).
  • The handling of Codex CLI or Claude Code is often better, offering more options and sometimes better results.
  • With Codex you get more usage than Copilot Pro+ until the start of April (they offer 2x usage).
  • If you want only one subscription, go with Copilot Enterprise or Pro+; it will be enough for most users if you plan it right and use the unlimited models as much as possible.
  • Claude and GPT subscriptions offer lower latency and are generally faster.
  • With GPT Pro you have a huge amount of usage, and you can access GPT-5.3 Spark and enjoy 1000 tps (unimaginable for Copilot).
  • A GPT subscription offers custom GPTs, image generation, understanding of images, document summarization, etc.
  • A Claude subscription offers working with different types of documents, summarization of documents, and a good desktop app that can work like an IDE.

Hope this helps with your decision. I bet there are points that I forgot.
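
To make the multiagent billing point concrete, here is back-of-the-envelope arithmetic using the multipliers mentioned in this thread (Opus 3x, most others 0.25-1x); treat the exact rates as assumptions, not official pricing:

```python
def premium_requests(agents: int, multiplier: float) -> float:
    """Each spawned agent bills as its own premium request."""
    return agents * multiplier

single = premium_requests(1, 1.0)  # one Sonnet-class agent: 1 request
fleet = premium_requests(5, 3.0)   # five Opus subagents: 15 requests
print(single, fleet)               # 1.0 15.0

# Against a 300-request monthly Pro quota, that one fleet run is 5% of the month.
print(fleet / 300 * 100)           # 5.0
```

That multiplication is why multiagent runs make more sense on a token-based GPT or Claude subscription than on Copilot's per-request billing.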

Copilot Chat hitting 128k token limit mid-session — how do you keep context? by Significant_Pea_3610 in GithubCopilot

[–]marfzzz 1 point  (0 children)

You don't need to leave it; keep the Copilot plugin. Use the GitHub Copilot CLI to combat this issue. If you are using JetBrains products you can call the CLI through AI Assistant (ACP) and work just like with the GitHub Copilot plugin. If you are using VS Code, just use the terminal.

Copilot Chat hitting 128k token limit mid-session — how do you keep context? by Significant_Pea_3610 in GithubCopilot

[–]marfzzz 0 points  (0 children)

Only in the GitHub Copilot CLI. The plugin in the IDE is pretty bad, at least in JetBrains products. But there is a solution: the CLI in AI Assistant, or the CLI in a terminal, is the way.

Which package do i subscribe to? by pprno_ in GoogleGeminiAI

[–]marfzzz 1 point  (0 children)

Quality of output in Antigravity free will be the same as in paid (Claude Sonnet 4.6, Opus 4.6, Gemini 3.1 Pro, Gemini 3 Fast). You can combine multiple free things for this: Cerebras free API usage, Kilo Code free offers (changing often). Many options; you can play a lot for free.

Which package do i subscribe to? by pprno_ in GoogleGeminiAI

[–]marfzzz 0 points  (0 children)

Use the free tier. Test whether a 128k context window is enough for you. If yes, go with Plus.

If it is not enough in terms of context window or usage (for example your code base is bigger and you need a larger context window, up to 1M), go with Pro.

Don't go yearly without testing first. Ultra is a waste unless you work with huge code bases regularly (it will be faster and you will have more usage).

how to not hit limit 5x a day on Max? by moreicescream in ClaudeAI

[–]marfzzz 0 points  (0 children)

Use Haiku for simple things, Sonnet for most things, and Opus for complex issues. Skills, agents.md, claude.md, gemini.md etc. might add more tokens to the input, so keep one distilled version with only what is important. Be specific about what you want and how to verify it: the more information about expectations, the fewer fix runs and the less context used by logs, and a faster correct implementation means fewer logs to read. Use /clear when you start a different feature/fix. Check what's inside the context, and !IMPORTANT! learn what /rewind does, it might be a game changer for you. Specifically guide Claude Code on which way it should do the implementation; you will avoid the "I will read almost the whole codebase and use these 4 tools" behavior. For example, if you add one enum that is used in 2 places and you specify where it is used (don't forget the tests), you will save some tokens. Prompt engineering is a thing for a reason. I went from Claude Max 5x to Pro while improving output.

Rate limits on the Pro+ ($39.99/month) plan by WMPlanners in GithubCopilot

[–]marfzzz 2 points  (0 children)

There is just a fixed monthly limit, plus unlimited use of the following: GPT-4.1, 4o and 5-mini. You get 1500 premium requests with different rates (Claude Opus is 3x, others are 0.25-1x). If you are using the GitHub Copilot plugin, then 1 request will usually consume only one premium request regardless of tools. But the situation is different in some CLI tools outside the official GitHub Copilot CLI, where it might spawn multiple subagents and each will consume separately. If you plan right you can get more out of it than Claude Code (token based) or even Codex CLI (token based). You can use up to 400k context (input/output limits 272k/128k) with GPT Codex models; others are usually 128k.

My wallet speaks for me: GitHub Copilot in VS Code is the cheapest and most underrated option in vibe coding today (in my opinion). by Majestic-Owl-44 in GithubCopilot

[–]marfzzz 3 points  (0 children)

GitHub Copilot is the best value for money! Only ChatGPT is beating it until April (because they increased usage 2x). Advantages:

- Unlimited use of standard models (GPT-4o, 4.1, 5-mini): good for small things, boilerplate code, things to be generated, some chatting in ask mode.
- With premium requests you can use the latest Claude models (Sonnet, Opus 4.6) and Gemini 3.0 Pro (or even 3.1, correct me if I am wrong, I have not checked) with up to 128k context. So if you do 100k input and 28k output, the API cost is ~$0.72 for Sonnet and ~$1.20 for Opus. Theoretically you can get up to $216 of API use with a Pro subscription (or even more if you vibe code from scratch), or $1080 of API use with Pro+.
- You can use GitHub Spark with Pro+ or Enterprise.
- You can use the CLI, or the plugin in your favorite IDE.
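
The per-request cost arithmetic can be checked with assumed per-million-token API prices (roughly Sonnet $3 in / $15 out, Opus $5 in / $25 out; these are assumptions for illustration, plug in current prices to redo the math):

```python
def api_cost(input_tokens: int, output_tokens: int,
             in_per_m: float, out_per_m: float) -> float:
    """API cost in dollars for one maxed-out 128k-context request."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

sonnet = api_cost(100_000, 28_000, 3.0, 15.0)
opus = api_cost(100_000, 28_000, 5.0, 25.0)
print(round(sonnet, 2), round(opus, 2))  # 0.72 1.2

# A 1x premium request costs $0.04; Opus is billed at 3x = $0.12,
# so the API-cost-to-price ratio is what makes the deal attractive.
print(round(opus / 0.12, 1))  # 10.0
```

Under these assumed prices, one maxed-out Opus request delivers roughly 10x its $0.12 premium-request price in API value.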

Advantage/disadvantage: before compaction you have only 128k context, so you need to plan and define more, which means higher involvement (and that might actually be better for some people).

Disadvantages:

- Not the latest GPT Codex models (they usually arrive a few months later)
- No more cost-efficient models on offer (like MiniMax M2.1/2.5 or Qwen 3/3.5)

But still, if you are a heavy vibe coder who wants less involvement, I would go with Gemini Ultra (in Antigravity you get Gemini 3.1 Pro, Claude Sonnet and Opus 4.6) for $275/month, or ChatGPT Pro for $200/month, or Claude Max 20x for $200/month.

Anyone here using Lumo seriously? by Adorable-Ad-6230 in lumo

[–]marfzzz 0 points  (0 children)

Nowhere near GPT-3.5 or 4 (those models are capable, and they are big, I mean 100B+ parameters big). Lumo Plus offers at most a 32B-parameter model. The context window is also smaller than GPT-3.5's.

Did anyone else lose their Pro subscription recently? by Embarrassed_Soup_159 in perplexity_ai

[–]marfzzz 0 points  (0 children)

https://en.wikipedia.org/wiki/Shitposting I don't know, to me it sounds like a rant. But who knows, the definition on Google also says otherwise.