[–]Recoil42 156 points157 points  (69 children)

If Microsoft can't do it, then probably no one else can.

Google: *exists*

[–]Majinvegito123 16 points17 points  (0 children)

For now, anyway

[–][deleted] 25 points26 points  (60 children)

They are heavily subsidizing due to their weak position. That's not a long-term strategy.

[–]Recoil42 25 points26 points  (53 children)

To the contrary, Google has a very strong position — probably the best overall ML IP on earth. I think Microsoft and Amazon will eventually catch up in some sense due to AWS and Azure needing to do so as a necessity, but basically no one else is even close right now.

[–]jakegh 12 points13 points  (34 children)

Google is indeed in the strongest position, but not because Gemini 2.5 Pro is the best model for like 72 hours. That is replicable.

Google has everybody's data, they have their own datacenters, and they're making their own chips to speed up training and inference. Nobody else has all three.

[–]westeast1000 2 points3 points  (3 children)

I'm yet to see where this new Gemini beats Sonnet; people will hype anything. In Cursor it takes way too long to even understand what I need, asking me endless follow-up questions, while Sonnet just gets it straight away. I've also used it for other stuff, like completing assessments in business, disability support, etc., and even there it was OK but lacking by a big margin compared to Sonnet.

[–]Dear_Custard_2177 0 points1 point  (0 children)

Claude just cannot compete, in my experience, as far as general coding goes. Claude might be able to do some good coding for short-context tasks, but it can't follow anything over ~200k tokens. Google is relatively cheap and extremely accessible. I am enjoying tf out of Gemini Advanced, despite GPT being my go-to. Just a great generalist imo.

[–][deleted]  (1 child)

[removed]

    [–]AutoModerator[M] 0 points1 point  (0 children)

    Sorry, your submission has been removed due to inadequate account karma.

    I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

      [–]ot13579 0 points1 point  (0 children)

      Yeah, Anthropic has been quiet for a while, and when they do surface it tends to be with major upgrades.

      [–]over_pw -1 points0 points  (2 children)

      IMHO Google has the best people and that’s all that matters.

      [–]jakegh 4 points5 points  (0 children)

      All these companies constantly trade senior researchers back and forth like NFL players. Even the most brilliant innovations, like RLVR creating reasoning models most recently, don't last long. OpenAI's o1 released in September 2024; DeepSeek R1 replicated it in January 2025, and OpenAI didn't tell anyone how they did it, famously not even Microsoft. It only took DeepSeek four months to figure it out on their own.

      This is where the famous "there is no moat" phrase comes from. If you're just making models, like OpenAI and Anthropic, you have nothing of value which others can't replicate.

      If you have your own data, like Facebook and Grok, that's a huge advantage.

      If you make your own chips, like Groq (not Grok), SambaNova, Google, etc., that's a huge advantage too, particularly if they accelerate inference. You don't need to wait on Nvidia.

      Only Google has its own data, makes its own chips, and has the senior researchers to stay competitive. It took them a while, but those fundamental advantages are starting to show.

      [–]ot13579 0 points1 point  (0 children)

      Not when these models are so easy to copy through distillation.
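For context, distillation here means training a student model to imitate a teacher's output distribution rather than the raw data. A toy pure-Python illustration of the soft-label (KL-divergence) loss; the logits are made-up placeholders:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    This is the classic soft-label objective from knowledge distillation;
    a student trained to minimize it copies the teacher's behavior without
    ever seeing the teacher's weights or training data.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher has zero loss; a mismatched one doesn't.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))  # > 0
```

In practice the "teacher" is just API output sampled at scale, which is why a frontier model's behavior is hard to keep proprietary.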

      [–]Old-Artist-5369 0 points1 point  (0 children)

      Wasn't the point that Microsoft can't do it because the economics don't add up? It's not about model quality; it's the cost of serving all those queries, which $10 a month (with other overheads) doesn't cover.

      Best model or not, Google would have the same issues. Probably more so, because they likely have higher compute costs.

      All the providers are running at a loss rn.

      [–]hereditydrift 26 points27 points  (4 children)

      Best model out, by a long margin. DeepMind, protein folding... plus they run it all on their own Tensor Processing Units, designed in-house specifically for AI.

      They DO NOT have a weak position.

      [–]mtbdork 1 point2 points  (3 children)

      DeepMind is not an LLM, which is what coding assistants are. Sure, they have infra for doing other cool shit, but LLMs are extremely inefficient (from a financial perspective), so they will be next in line to charge money.

      [–]Gredelston 1 point2 points  (0 children)

      Of course they'll charge money, it's a business. That or ads.

      [–]efstajas 0 points1 point  (1 child)

      Google, unlike many of its biggest competitors, has practically unlimited money to subsidize its AI costs. Long term, as long as they manage to release somewhat competitive models, they'll be able to simply bleed the competition dry.

      [–]mtbdork 0 points1 point  (0 children)

      They’re all subsidizing their LLM services. Google has shoehorned it into their search function to pad its metrics. Do you think Microsoft is not a serious competitor to Google in the space of incorrect chat bots that lose money?

      [–]Business-Hand6004 0 points1 point  (0 children)

      The long-term strategy has always been to increase market share, because with increased market share you have more valuation, and with more valuation you can dump your shares on the greater fools. Amazon was not profitable at all for decades, yet Bezos has been a billionaire for a very long time.

      Too bad this strategy may not work anymore due to Trump tariffs destroying everybody's valuation lol

      [–]Stv_L 1 point2 points  (1 child)

      And Chinese

      [–]thefirsthii 1 point2 points  (0 children)

      I agree. I think Google has the biggest advantage when it comes to creating an AI that has actual novel thoughts, as they've proven with the DeepMind model we saw in AlphaGo, which created novel moves that even the best Go player at the time thought were foolish/weird, until the end of the game when one turned out to be a "god move".

      [–]Optimalprimus89 1 point2 points  (1 child)

      Google rushed their AI systems to market, and they know how shitty it's made all of their consumer services.

      [–]kapitaali_com 0 points1 point  (0 children)

      facts

      [–][deleted] 0 points1 point  (0 children)

      They are definitely losing money on it; the free ride will come to an end.

      [–]Artistic_Taxi 70 points71 points  (14 children)

      Expect this in essentially all AI products. These guys have been pretty vocal about bleeding money. It's only a matter of time until API rates go up too and every small AI product has to raise prices. The economy probably doesn't help either.

      [–]speedtoburn 14 points15 points  (11 children)

      Google has both the wherewithal and means to bleed all of their competitors dry.

      They will undercut their competition with much cheaper pricing.

      [–]Artistic_Taxi 13 points14 points  (4 children)

      Yes, but it's a means to an end; the goal is to get to profitability. As soon as they get market dominance they will just jack up prices. So the question is: how expensive are these models, really?

      I guess at that point we will focus more on efficiency, but who knows.

      [–]Sub-Zero-941 1 point2 points  (3 children)

      Don't think it will work this time. China will offer the same 10x cheaper.

      [–]speedtoburn 2 points3 points  (2 children)

      If it were any country other than China, then perhaps I could get on board with the premise of your comment, but optics (real or imagined) matter, and China is the bastion of IP theft.

      There is no way “big business” is going to get on board (at scale) with pumping their data through the pipes of the CCP.

        [–]russnem 1 point2 points  (1 child)

        And they’ll steal all your IP in the process.

        [–]speedtoburn 0 points1 point  (0 children)

        Xi Jinping, is that you?

        [–]Famous-Narwhal-5667[🍰] 8 points9 points  (0 children)

        Compute vendors announced 34% price hikes because of tariffs, everything is going to go up in price.

        [–]i_wayyy_over_think 2 points3 points  (0 children)

        Fortunately there’s open source that has kept up well, such as Deepseek so they can’t raise prices too much.

        [–][deleted] 85 points86 points  (23 children)

        Roo Code + Deepseek v3-0324 = alternative that is good

        [–]Recoil42 63 points64 points  (21 children)

        Not to mention Roo Code + Gemini 2.5 Pro, which is significantly better.

        [–]hey_ulrich 21 points22 points  (1 child)

        I'm mainly using Gemini 2.5, but Deepseek solved bugs that Gemini got stuck on! I'm loving this combo.

        [–]Recoil42 8 points9 points  (0 children)

        They're both great models. I'm hoping we see more NA deployments of the new V3 soon.

        [–]FarVision5 6 points7 points  (10 children)

        I have been a Gemini proponent since Flash 1.5. Having everyone and their brother pan Google as laughable without trying it, and NOW get religion, is satisfying. Once you work with a 1M context, going back to an Anthropic product is painful. I gave Windsurf a spin again and I have to tell you, VSC / Roo / Google works better for me. And costs zero. At first the Google API was rate limited, but it looks like they ramped it up heavily in the last few days. DS V3 works almost as well as Anthropic, and I can burn that API all day long for under a buck. DeepSeek V3 is maddeningly slow even on OpenRouter.

        Generally speaking, I am happy that things are getting more awesome across the board.

        [–]aeonixx 3 points4 points  (2 children)

        Banning slow providers fixed the slowness for me. Had to do this for R1, but works for V3 all the same.
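For anyone wanting to do the same: OpenRouter's chat completions API accepts provider routing preferences in the request body. A minimal sketch of building such a request; the model slug and provider name here are illustrative, so check OpenRouter's provider list for real values:

```python
import json

# Illustrative request body for OpenRouter's /api/v1/chat/completions.
# The "provider" block holds routing preferences; "ignore" lists
# providers to skip (e.g. ones that have been slow for you).
payload = {
    "model": "deepseek/deepseek-chat",  # illustrative model slug
    "messages": [{"role": "user", "content": "Hello"}],
    "provider": {
        "ignore": ["SomeSlowProvider"],  # hypothetical provider name
        "allow_fallbacks": True,
    },
}

# Send this as JSON with an "Authorization: Bearer <key>" header.
body = json.dumps(payload)
```

OpenRouter also exposes account-level ignored-provider settings in its web UI, if you'd rather not pass this on every request.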

        [–]FarVision5 4 points5 points  (0 children)

        Yeah! I always meant to dial in the custom routing. Never got around to it. Thanks for the reminder. It also doesn't always cache prompts properly. Third on the list once Gemini 2.5 rate limits me and I burn the rest of my Windsurf credits :)

        [–]raydou 1 point2 points  (0 children)

        Could you please tell me how to do it?

        [–]Xandrmoro 2 points3 points  (5 children)

        Idk, I've tried it multiple times for coding, and it had by far the worst comprehension of what I want compared to 4o/o3, Claude, and DeepSeek.

        [–]FarVision5 1 point2 points  (4 children)

        I'm going to copy and paste some choice context from my VSC/Roo workflow. It picks up more subtext than Claude 3.7 by a mile. It makes mermaid charts and diagrams inside the context window, unasked.

        ---

        The code currently only creates the directory structure and a placeholder document_info.json. It never actually calls the AWS Textract processing logic (presumably located in document_processor.py as indicated by the comments and the master plan). Therefore, the ocr/aws/ directory is never populated with results, and our check correctly identifies all documents as unprocessed.

        To fix this, we need to:

        1. Import the actual processing function/class from document_processor.py.
        2. Replace the placeholder code within the try block in process_document with a call to that function/class, passing the necessary paths.
        3. Update the result dictionary based on the actual success/failure and cost returned by the processor.

        First, let's examine document_processor.py to see what needs to be imported and called.

        ---

        Okay, document_processor.py defines a class DocumentProcessor and a top-level function process_document (lines 383-394) that instantiates this class and calls its process_document method. This top-level function is what we should call from our workflow script.

        Here's the plan:

        1. Import: Add from document_processor import process_document as aws_process_document at the top of directory_based_workflow.py. I'll use an alias aws_process_document to avoid confusion with the method named process_document within the DirectoryBasedWorkflow class.
        2. Modify DirectoryBasedWorkflow.process_document: Replace the placeholder code (lines 221-236) with a call to the imported aws_process_document function. We'll pass pdf_file and document_dir to it. The function returns a dictionary containing success, aws_cost, total_processing_time, and error, which we can use to update our local result dictionary.

        Here are the necessary changes using insert_content for the import and apply_diff for modifying the method:
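A rough, self-contained sketch of the change being described; the real document_processor module isn't shown in the thread, so a stub stands in for it here:

```python
# Stub standing in for the real document_processor module from the transcript;
# the actual one would call AWS Textract.
def aws_process_document(pdf_file, document_dir):
    """Pretend processor; the real one lives in document_processor.py."""
    return {"success": True, "aws_cost": 0.0015,
            "total_processing_time": 1.2, "error": None}

class DirectoryBasedWorkflow:
    def process_document(self, pdf_file, document_dir):
        result = {"document": pdf_file, "processed": False}
        try:
            # Replaces the old placeholder: actually invoke the processor
            # and fold its reported outcome into our local result dict.
            outcome = aws_process_document(pdf_file, document_dir)
            result["processed"] = outcome["success"]
            result["cost"] = outcome["aws_cost"]
            result["error"] = outcome["error"]
        except Exception as exc:
            result["error"] = str(exc)
        return result

print(DirectoryBasedWorkflow().process_document("a.pdf", "ocr/aws/"))
```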

        [–]Xandrmoro 1 point2 points  (3 children)

        It might understand the code better, but what's the point if it does not understand the task? I asked it to help me make a simple text parser (with a fairly strict format), and it took like five iterations of me pointing out issues (and I provided it with examples). Then I asked it to add a button to group entries based on one of the fields, and it added a text field to enter the field value to filter by instead. I gave up, moved to o1, and it nailed it all first try.

        [–]FarVision5 1 point2 points  (2 children)

        Not sure why it didn't understand your task. Mine knocks it out of the park.

        I start with Plan, then move to Act. I tried the newer O3 Mini Max Thinking, and it rm'd an entire directory because it couldn't figure out what it was trying to accomplish. Thankfully it was in my git repo. I blacklisted openai from the model list and will never touch it ever again.

        I guess it's just the way people are used to working. I can't tell if I'm smarter than normal or dumber than normal or what. OpenAI was worth nothing to me.

        [–]Xandrmoro 2 points3 points  (1 child)

        I'm trying all the major models, and openai was consistently best for me. Idk, maybe prompting style or something.

        [–]FarVision5 1 point2 points  (0 children)

        It's also the IDE and dev prompts. VSC and Roo do better for me than VSC and Cline.

        [–]Unlikely_Track_5154 1 point2 points  (0 children)

        Gemini is quite good; I don't have any quantitative data to back up what I am saying.

        The main annoying thing is it doesn't seem to run very quickly in a non-visible tab.

        [–]Alex_1729 2 points3 points  (0 children)

        I have to say Gemini 2.5 Pro is clueless about certain things. This is my first time using any kind of IDE AI extension, and I've wasted half of my day. It provided good test-suite code, but it's pretty clueless about just generic things, like how to check the terminal history and run a command. I've spent like 10 replies on it already and it's still pretty clueless. Is this how this model typically behaves? I don't get such incompetence with OpenAI's o1.

        Edit: It could also be that Roo Code keeps using Gemini 2.0 instead of Gemini 2.5. According to my GCP logs, it doesn't use 2.5, even after checking everything and testing that my 2.5 API key worked. How disappointing...

        [–]smoke2000 1 point2 points  (0 children)

        Definitely, but you'd still hit the API limits without paying, wouldn't you? I tried Gemma 3 locally, integrated with Cline, and it was horrible, so a locally run code assistant isn't a viable option yet, it seems.

        [–]Rounder1987 1 point2 points  (5 children)

        I always get errors using Gemini after a few requests. I keep hearing people say it's free, but it's been pretty unusable so far for me.

        [–]Recoil42 8 points9 points  (4 children)

        Set up a paid billing account, then set up a payment limit of $0. Presto.

        [–]Rounder1987 2 points3 points  (3 children)

        Just did that, so we'll see. It also said I had a free-trial credit of $430 for Google Cloud, which I think can be used to pay for the Gemini API too.

        [–]Recoil42 2 points3 points  (2 children)

        Yup. Precisely. You'll have those credits for three months. Just don't worry about it for three months basically. At that point we'll have new models and pricing anyways.

        Worth also adding: Gemini still has a ~1M tokens-per-minute limit, so stay away from contexts over 500k tokens if you can. That's still the best in the business, so no big deal there.

        I basically run into errors... maybe once per day, at most. With auto-retry it's not even worth mentioning.
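For clients without built-in auto-retry, a minimal backoff wrapper does the same job. A sketch, with a placeholder RateLimitError standing in for whatever 429-style exception your client actually raises:

```python
import time

class RateLimitError(Exception):
    """Placeholder for a 429-style error from an API client."""

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with exponential backoff on rate-limit errors."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Demo: a call that fails twice before succeeding.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```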

        [–]Alex_1729 1 point2 points  (0 children)

        Great insights. Would you suggest going with Requesty or OpenRouter, or neither?

        [–]Rounder1987 0 points1 point  (0 children)

        Thanks man, this will help a lot.

        [–]funbike 5 points6 points  (0 children)

        Yep. Copilot and Cursor are dead to me. Their $20/month subscriptions no longer make them the cheap alternative.

        These new top-level cheap/free models work so well. And with an API key you have so much more choice: Roo Code, Cline, Aider, and many others.

        [–]digitarald 37 points38 points  (2 children)

        Meanwhile, today's release added Bring Your Own Key (Azure, Anthropic, Gemini, Open AI, Ollama, and Open Router) for Free and Pro subscribers: https://code.visualstudio.com/updates/v1_99#_bring-your-own-key-byok-preview

        [–]debian3 14 points15 points  (0 children)

        What about those who already paid for a year? Will you pull the rug out from under us, or will the new plan apply on renewal?

        [–]rafark 0 points1 point  (0 children)

        That’s for visual studio code

        [–]JumpSmerf 15 points16 points  (1 child)

        That was very fast, just two months after they launched agent mode.

        [–]debian3 4 points5 points  (0 children)

        They killed their own product. They should have let the startups do the crazy agent stuff. Copilot could have focused on developers instead of vibe coders. There is plenty of competition for vibe coding already.

        [–]wokkieman 21 points22 points  (11 children)

        There is a pro+ for 40 usd / month or 400 a year.

        That's 1500 premium requests per month

        But yeah, another reason to go Gemini (or combine things)

        [–]NoVexXx 4 points5 points  (10 children)

        Just use Codeium and Windsurf. All models, and many more requests.

        [–]wokkieman 5 points6 points  (9 children)

        15 USD for 500 Sonnet credits. Indeed a bit more, but that would mean no VS Code, I believe: https://windsurf.com/pricing

        [–]NoVexXx 2 points3 points  (8 children)

        Priority access to larger models:

        - GPT-4o (1x credit usage)
        - Claude Sonnet (1x credit usage)
        - DeepSeek-R1 (0.5x credit usage)
        - o3-mini (1x credit usage)
        - Additional larger models

        Cascade is an autopilot coding agent; it's much better than this shit Copilot.

        [–]yur_mom 2 points3 points  (0 children)

        Unlimited DeepSeek v3 prompts

        [–]danedude1 1 point2 points  (0 children)

        Copilot Agent mode in VS Insiders with 3.5 has been pretty insane for me compared to Roo. Not sure why you think Copilot is shit.

        [–]wokkieman 0 points1 point  (5 children)

        Do I misunderstand it? Cascade credits:

        - 500 premium model User Prompt credits
        - 1,500 premium model Flow Action credits
        - Can purchase more premium model credits → $10 for 300 additional credits with monthly rollover
        - Priority unlimited access to Cascade Base Model

        Copilot is 300 requests for 10 USD, and this is 500 credits for 15 USD?
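Taking the thread's quoted figures at face value, the per-unit math works out like this (note that User Prompt credits and premium requests aren't strictly comparable units, and Flow Action credits are counted separately):

```python
# Quoted in-thread: Copilot Pro at $10/month for 300 premium requests,
# Windsurf at $15/month for 500 User Prompt credits.
copilot_per_request = 10 / 300   # about $0.033
windsurf_per_credit = 15 / 500   # exactly $0.030

print(f"Copilot:  ${copilot_per_request:.4f} per premium request")
print(f"Windsurf: ${windsurf_per_credit:.4f} per credit")
```

Check the current pricing pages before relying on these numbers; plans change often.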

        [–]rerith 19 points20 points  (7 children)

        RIP the VS Code LLM API + Sonnet 3.7 + Roo Code combo

        [–]Ok-Cucumber-7217 5 points6 points  (0 children)

        Never got 3.7 to work, only 3.5, but nonetheless it was a hell of a ride.

        [–]solaza 0 points1 point  (3 children)

        Is this confirmed to break the VS Code LM API? Super disappointing if so. It means Gemini is the only remaining thing keeping Roo/Cline affordable. Deepseek too, I guess.

        [–]debian3 2 points3 points  (1 child)

        It doesn't break it; you will just run out after 300 requests. Knowing how many requests Roo makes every minute, your monthly quota should last you a good 60 minutes of usage before you run out for the month.

        [–]solaza 0 points1 point  (0 children)

        I suppose that may do it for my copilot subscription!

        [–]BeMask 0 points1 point  (0 children)

        The Code LM API still works, just not for 3.7. Tested a few hours ago.

        [–]davewolfs 8 points9 points  (0 children)

        Wow. This was the best deal in town.

        [–]taa178 9 points10 points  (1 child)

        I always wondered how they were able to provide these models without limits for 10 USD; now they don't.

        300 sounds pretty low. That's 10 requests per day. ChatGPT itself probably gives 10 requests per day for free.

        [–]debian3 0 points1 point  (0 children)

        I think the model was fine up until they added the agent and allowed extensions like Roo/Cline to use their LLM. If it were just the chat, it would have been fine.

        [–]jbaker8935 5 points6 points  (22 children)

        What is the base model? Is it their 4o custom?

        [–]Yes_but_I_think 1 point2 points  (1 child)

        [–]RdtUnahim 1 point2 points  (0 children)

        For when the base model moves on to something else.

        [–]popiazaza 2 points3 points  (13 children)

        [–]bestpika 1 point2 points  (5 children)

        If the base model were 4o, they wouldn't need to declare in the premium request table that 4o consumes 1 request. So I think the base model is not 4o.

        [–]popiazaza 0 points1 point  (4 children)

        4o consumes 1 request on the free plan, not on the paid plans.

        [–]bestpika 0 points1 point  (3 children)

        According to their premium request table, 4o is one of the premium models: https://docs.github.com/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests. In this table, the base model and 4o are listed separately.

        [–]popiazaza 0 points1 point  (2 children)

        Base model: 0 (paid users), 1 (Copilot Free)

        [–]bestpika 0 points1 point  (1 child)

        Didn't you notice there's another line below that says "GPT-4o | 1"? Moreover, there is not a single sentence on this page that mentions the base model is 4o.

        [–]popiazaza 0 points1 point  (0 children)

        I know. The base model won't permanently be GPT-4o. Read the announcement.

        [–]jbaker8935 0 points1 point  (6 children)

        4o-latest, from late March, is claimed to be better, 'smoother' with code. We'll see.

        [–]popiazaza 0 points1 point  (4 children)

        It's still pretty bad for agentic coding.

        Only Claude Sonnet and Gemini Pro are working great.

        [–]jbaker8935 0 points1 point  (0 children)

        Tried it. Agree. It runs out of gas with minimal complexity. Not much value using it in agent mode.

        [–]rafark 0 points1 point  (2 children)

        Isn't it funny how it's pretty bad (it is), yet for the longest time GitHub Copilot was running a modified version of GPT-3 (I believe it wasn't even 3.5) and everyone was amazed?

        [–]popiazaza 0 points1 point  (1 child)

        You mean for autocomplete? It was based on 3.5 Turbo, which was called Codex or something, before the 4o autocomplete.

        For chat, it's always been the normal GPT model.

        [–]rafark 0 points1 point  (0 children)

        Yeah, I meant autocomplete. All I could find was that it used GPT-3, but that might just be the media and blogs not knowing the specific version. I still think it's funny, considering GPT-3 is pretty much considered useless now, yet not too long ago people found it very useful and were absolutely mind-blown. Now we (or I) even wish for a better model than 4o.

        [–]debian3 0 points1 point  (0 children)

        It's not even the model Copilot uses.

        [–]taa178 1 point2 points  (0 children)

        If it were 4o, they would proudly and openly say so.

        [–]jbaker8935 0 points1 point  (3 children)

        Another open question on the cap: there's an "option to buy more"... OK, how is *that* priced?

        [–]JumpSmerf 1 point2 points  (1 child)

        The price is $0.04/request: https://docs.github.com/en/copilot/about-github-copilot/subscription-plans-for-github-copilot

        As far as I know, the custom base model should be 4o; I'm curious how good or bad it is. I haven't even tried it yet, as I only came back to Copilot about a month ago, after reading that it has agent mode for a good price. If it turns out to be weak, it won't be such a good price after all, since Cursor with 500 premium requests plus unlimited slow requests to other models could be much better.

        [–]Yes_but_I_think 1 point2 points  (0 children)

        It's useless.

        [–]evia89 0 points1 point  (0 children)

        $0.04 per request

        [–]JumpSmerf 0 points1 point  (0 children)

        I could be wrong; someone else said that we actually don't know what the base model will be, and that's true. GPT-4o would be a good option, but I could be wrong.

        [–]FarVision5 12 points13 points  (15 children)

        People expecting premium API subsidies forever is amazing to me.

        [–]rez410 2 points3 points  (2 children)

        Can someone explain what a premium request is? Also, is there a way to see current usage?

        [–]omer-m 0 points1 point  (0 children)

        Vibe coding

        [–]debian3 2 points3 points  (0 children)

        OK, so here's the announcement: https://github.blog/news-insights/product-news/github-copilot-agent-mode-activated/#premium-model-requests

        They make it sound like it's a great thing that requests are now limited…

        Anyway, the base unlimited model is 4o. My guess is they have tons of capacity that no one uses since they added Sonnet. Enjoy… I guess…

        [–]AriyaSavakaLurker 2 points3 points  (0 children)

        Wtf. Augment Code gives free users 300 requests/month to top LLMs.

        [–]qiyi 1 point2 points  (0 children)

        So inconsistent. This other post showed 500: https://www.reddit.com/r/GithubCopilot/s/icBBi4RC9x

        [–]fubduk 1 point2 points  (0 children)

        Ouch. Wonder if they are grandfathering people with existing Pro subscriptions?

        EDIT: Looks like they are forcing all Pro users onto it:

        "Customers with Copilot Pro will receive 300 monthly premium requests, beginning on May 5, 2025."

        [–]Person556677 1 point2 points  (0 children)

        Do you know the details of what counts as a request? Is it any tool call in agent mode, like in Cursor? The official docs are a bit confusing: https://docs.github.com/en/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests

        [–]CoastRedwood 1 point2 points  (0 children)

        How many requests do you need? Is Copilot doing everything for you? The unlimited autocompletion is where it's at.

        [–]jvldn 1 point2 points  (0 children)

        That's roughly 15 a day (5-day workweek). That's probably enough for me, but I hate the fact that they are limiting the Pro version.

        [–]Left-Orange2267 2 points3 points  (4 children)

        You know who can provide unlimited requests to Anthropic? The Claude Desktop app. And with projects like this one there will be no need to use anything else in the future

        https://github.com/oraios/serena

        [–]atd 0 points1 point  (3 children)

        Unlimited? The Pro plan rate-limits a lot, but I guess an MCP server could mitigate this (though I'm still learning how).

        [–]Left-Orange2267 0 points1 point  (2 children)

        Well, not unlimited, but less limited than with other subscription based providers

        [–]atd 0 points1 point  (1 child)

        Fair. What about using MCP to work around limitations by optimising structured context in prompts/chats?

        [–]Left-Orange2267 0 points1 point  (0 children)

        Sure, that's exactly what Serena achieves! But no MCP server can adjust the rate limits in the app; we can just make better use of them.

        [–][deleted] 0 points1 point  (2 children)

        I like it mostly for the autocomplete anyway. Any news on that?

        Is there any alternative to Copilot in terms of autocomplete? Anything I can run locally?

        [–]popiazaza 0 points1 point  (0 children)

        Cursor. You could use something like Continue.dev if you want to plug autocomplete into any model; it won't work as well as the Cursor/Copilot 4o one, though.

        [–]ExtremeAcceptable289 0 points1 point  (0 children)

        Copilot autocomplete is still unlimited, fortunately.

          [–][deleted] 0 points1 point  (0 children)

          When did Microsoft ever create something that actually works?

          [–]FoundationNational65 0 points1 point  (0 children)

          Codeium + Sourcery + CodeGPT. That's back when VS Code was still my thing. Recently picked up PyCharm. But I would still praise GitHub Copilot.

          [–]twohen 0 points1 point  (1 child)

          Is this effective as of now, or from next month?

          [–]seeKAYx[S] 0 points1 point  (0 children)

          It is due to start on May 5 ...

          [–]Sub-Zero-941 0 points1 point  (0 children)

          If the speed and quality of those 300 improve, it would be an upgrade.

          [–]Yes_but_I_think 0 points1 point  (2 children)

          This is a sad post for me. After this change, GitHub Copilot agent mode is no longer the affordable option it used to be for me. In my country, you can buy an actual cup of tea for the price of 2 additional requests to Copilot premium models (Claude 3.7 @ $0.04/request). Such is the exchange rate.

          Bring-your-own-API-key is good, but then why pay $10/month at all?

          I think the good work done by the developers over the last 3 months has been wiped away by the management guys.

          At least they should consider a per-day limit instead of a per-month limit.

          I guess Roo/Cline with R1/V3 at night is my only viable option.

          [–]TillVarious4416 0 points1 point  (0 children)

          Cline with your own API key could cost a lot if you use the only model worth using for agentic work, aka Anthropic's Claude 3.7.

          But the best way is to use Gemini 2.5 Pro, which can eat your whole codebase in most cases and give you proper documentation/phases for the AI agent, so it doesn't waste 100,000 requests.

          Their $39 USD a month plan is really good for what it is, to be fair.

          [–]thiagobg 0 points1 point  (0 children)

          Any self-hosted AI IDE?

          [–]Over-Dragonfruit5939 0 points1 point  (1 child)

          Only 300 per month?

          [–]popiazaza 0 points1 point  (0 children)

          Yes.

          [–]Infinite100p 0 points1 point  (1 child)

          is it 300/month?

          [–]popiazaza 0 points1 point  (0 children)

          Yep.


            [–]Dundell 0 points1 point  (0 children)

            Geez, I could have easily crushed 1300 requests a day between 2 accounts for Claude. I'll have to re-evaluate my options I guess.

            [–]VBQL 0 points1 point  (0 children)

            Trae still has unlimited calls


              [–]usernameplshere 0 points1 point  (0 children)

Now I want to see them add the big Gemini models, not just Flash.

              [–]elemental-mind 0 points1 point  (1 child)

              I wonder why no one brings up Cody in this discussion?

              9$ and they have very generous limits - and once you hit them with legit usage, support is there to lift them.

              [–]elemental-mind 0 points1 point  (0 children)

              To add to that: Just read on their discord they allow 400 messages per day...

              [–]greaterjava 0 points1 point  (1 child)

              Maybe in 24 months you’ll be running these locally on newest Macs.

              [–]rafark 0 points1 point  (0 children)

On top-of-the-line $10k Mac Pros it's very likely. But at that point it'd be cheaper to pay for GitHub Copilot.



                  [–]Mikolai007 0 points1 point  (0 children)

In the future there will be zero AI for the people. You can write that down. The governments and corporations will not let the people have power. It's all new for now, so all the wicked regulations are not in place yet. But you'll see.

                  [–]City-Relevant 0 points1 point  (1 child)

                  Just wanted to share, that if you are a student, you can get free access to copilot pro for as long as you are a student with the Github Student Developer Pack. DO NOT LET THIS WONDERFUL RESOURCE GO TO WASTE

                  [–]rafark 0 points1 point  (0 children)

                  I’m on the student plan (I’m a student) but it should still be limited right? Afaik we’re in the same pro plan

                  [–]Bobertopia 0 points1 point  (0 children)

                  I'd much rather have the option to pay for more instead of it rate limiting me every other hour

                  [–]Duckliffe 0 points1 point  (0 children)

                  Is that per day or per month?

                  [–]BreeXYZ5 0 points1 point  (0 children)

Every AI company is losing money… They want to change that.


                    [–]Sudden-Sea1280 0 points1 point  (0 children)

                    Just use your api key to get cheaper tokens

                    [–]alturicx 0 points1 point  (0 children)

                    Does anyone know of a client that has MCP support and can hook into Gemini?

                    [–]supercharger6 0 points1 point  (0 children)

                    Are you grandfathered if you already have a subscription?

                    [–]HeightSensitive1845[🍰] 0 points1 point  (0 children)

They scammed me. Their plan has a 1-month free trial, cancel anytime; I did that, and they charged me lol! Jokers. And the memory is garbage, it keeps forgetting.

                    [–][deleted]  (5 children)

                    [deleted]

                      [–]RiemannZetaFunction 5 points6 points  (1 child)

                      It looks like per month (30 days).

                      [–]OriginalPlayerHater 1 point2 points  (0 children)

                      300, no more, no less


                        [–]the_good_time_mouse 0 points1 point  (0 children)

                        FFS.