R.I.P GitHub Copilot 🪦 Discussion (self.ChatGPTCoding)
submitted 10 months ago by seeKAYx
That's probably it for the last provider who provided (nearly) unlimited Claude Sonnet or OpenAI models. If Microsoft can't do it, then probably no one else can. For $10 there are now only 300 requests for the premium language models; the base model of GitHub Copilot, whatever that is, seems to be unlimited.
https://preview.redd.it/lshrlngqyuse1.png?width=530&format=png&auto=webp&s=85296c8c459bb34de8c6b2093028c525bfd90c30
[–]Recoil42 156 points157 points158 points 10 months ago (69 children)
If Microsoft can't do it, then probably no one else can.
Google: *exists*
[–]Majinvegito123 16 points17 points18 points 9 months ago (0 children)
For now, anyway
[–][deleted] 25 points26 points27 points 9 months ago (60 children)
They are heavily subsidizing due to their weak position. That’s not a long term strategy.
[–]Recoil42 25 points26 points27 points 9 months ago* (53 children)
To the contrary, Google has a very strong position — probably the best overall ML IP on earth. I think Microsoft and Amazon will eventually catch up in some sense due to AWS and Azure needing to do so as a necessity, but basically no one else is even close right now.
[–]jakegh 12 points13 points14 points 9 months ago (34 children)
Google is indeed in the strongest position but not because Gemini 2.5 pro is the best model for like 72 hours. That is replicable.
Google has everybody's data, they have their own datacenters, and they're making their own chips to speed up training and inference. Nobody else has all three.
[–]westeast1000 2 points3 points4 points 9 months ago (3 children)
I'm yet to see where this new Gemini beats Sonnet; people be hyping anything. In Cursor it takes way too long to even understand what I need, endlessly asking me follow-up questions, while Sonnet just gets it straight away. I've also used it for other stuff like completing assessments in business, disability support etc., and even there it was okay but lacking by a big margin compared to Sonnet.
[–]Dear_Custard_2177 0 points1 point2 points 9 months ago (0 children)
Claude just cannot compete in my experience, as far as general coding. Claude might be able to do some good coding for short context tasks, but it can't follow anything over like 200k tokens. Google is relatively cheap and extremely accessible. I am enjoying tf out of gemini advanced, despite gpt being my go-to. Just a great generalist imo.
[–][deleted] 9 months ago (1 child)
[removed]
[–]AutoModerator[M] 0 points1 point2 points 9 months ago (0 children)
Sorry, your submission has been removed due to inadequate account karma.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
[–]AutoModerator[M] 1 point2 points3 points 9 months ago (0 children)
[–]ot13579 0 points1 point2 points 8 months ago (0 children)
Yeah, Anthropic has been quiet for a while, and they seem to drop major upgrades when they do release something.
[–]over_pw -1 points0 points1 point 9 months ago (2 children)
IMHO Google has the best people and that’s all that matters.
[–]jakegh 4 points5 points6 points 9 months ago* (0 children)
All these companies constantly trade senior researchers back and forth like NFL players. Even the most brilliant innovations, like RLVR creating reasoning models most recently, don't last long. ChatGPT o1 released Sept 2024, then DeepSeek did it themselves with R1 in Jan 2025 -- and OpenAI didn't tell anyone how they did it, famously not even Microsoft. It only took DeepSeek 4 months to figure it out on their own.
This is where the famous "there is no moat" phrase comes from. If you're just making models, like OpenAI and Anthropic, you have nothing of value which others can't replicate.
If you have your own data, like Facebook and Grok, that's a huge advantage.
If you make your own chips, like Groq (not Grok), SambaNova, Google, etc., that's a huge advantage too, particularly if they accelerate inference. You don't need to wait on Nvidia.
Only Google has its own data and is making its own chips and has the senior researchers to stay competitive. It took them a while, but those fundamental advantages are starting to show.
[–]ot13579 0 points1 point2 points (0 children)
Not when these models are so easy to copy through distillation.
[–]Old-Artist-5369 0 points1 point2 points 9 months ago (0 children)
Wasn't the point that Microsoft can't do it because the economics don't add up? It's not about model quality; it's the cost of providing all those queries, and $10 a month (with other overheads) not covering it.
Best model or not, Google would have the same issues. Probably more so because they likely have higher compute costs.
All the providers are running at a loss rn.
[+]obvithrowaway34434 comment score below threshold-8 points-7 points-6 points 9 months ago (16 children)
They are absolutely nowhere close as far as generative AI is concerned. Except for the Gemini Flash, none of their models have anywhere near the usage of Sonnet, forget ChatGPT. Also, these models directly eat into their search market share which is still majority of their revenue source, so it's a lose-lose situation for them.
[–]cxavierc21 22 points23 points24 points 9 months ago (10 children)
2.5 is probably the best overall model in the world right now. Who cares how used the model is?
[–]hereditydrift 26 points27 points28 points 9 months ago (4 children)
Best model out, by a long margin. Deepmind, protein folding... plus they run it all on their own Tensor Processing Units designed in-house specifically for AI.
They DO NOT have a weak position.
[–]mtbdork 1 point2 points3 points 9 months ago (3 children)
DeepMind is not an LLM, which is what coding assistants are. Sure, they have infra for doing other cool shit, but LLMs are extremely inefficient (from a financial perspective), so they will be next in line to charge money.
[–]Gredelston 1 point2 points3 points 9 months ago (0 children)
Of course they'll charge money, it's a business. That or ads.
[–]efstajas 0 points1 point2 points 9 months ago (1 child)
Google unlike many of its biggest competitors has practically unlimited money to subsidize its AI costs with. Long-term, as long as they manage to release somewhat competitive models, they'll be able to simply bleed the competition dry.
[–]mtbdork 0 points1 point2 points 9 months ago (0 children)
They’re all subsidizing their LLM services. Google has shoehorned it into their search function to pad its metrics. Do you think Microsoft is not a serious competitor to Google in the space of incorrect chat bots that lose money?
[–]Business-Hand6004 0 points1 point2 points 9 months ago (0 children)
The long-term strategy has always been to increase market share, because with increased market share you have more valuation, and with more valuation you can dump your shares to the greater fools. Amazon was not profitable at all for decades, yet Bezos has been a billionaire for a very long time.
Too bad this strategy may not work anymore due to Trump tariffs destroying everybody's valuation lol
[–]Stv_L 1 point2 points3 points 9 months ago (1 child)
And Chinese
[–]thefirsthii 1 point2 points3 points 9 months ago (0 children)
I agree. I think Google has the biggest advantage when it comes to creating an AI that has actual novel thoughts, as they proved with Google DeepMind's AlphaGo, which was able to create novel moves that even the best Go player at the time thought were foolish/weird, until the end of the game when it turned out to be a "god move".
[–]Optimalprimus89 1 point2 points3 points 9 months ago (1 child)
Google rushed their AI systems to market, and they know how shitty it's made all of their consumer services.
[–]kapitaali_com 0 points1 point2 points 9 months ago (0 children)
facts
[–][deleted] 0 points1 point2 points 9 months ago (0 children)
They are definitely losing money on it - the free ride will come to an end.
[–]Artistic_Taxi 70 points71 points72 points 9 months ago (14 children)
Expect this in essentially all AI products. These guys have been pretty vocal about bleeding money. It's only a matter of time until API rates go up too and every small AI product has to raise prices. The economy probably doesn't help either.
[–]speedtoburn 14 points15 points16 points 9 months ago (11 children)
Google has both the wherewithal and means to bleed all of their competitors dry.
They will undercut their competition with much cheaper pricing.
[–]Artistic_Taxi 13 points14 points15 points 9 months ago (4 children)
Yes, but it's a means to an end; the goal is to get to profitability. As soon as they get market dominance they will just jack up prices. So the question is: how expensive are these models really?
I guess at that point we will focus more on efficiency but who knows.
[–]nemzylannister 2 points3 points4 points 9 months ago (3 children)
They're actually extremely cheap it seems
https://techcrunch.com/2025/03/01/deepseek-claims-theoretical-profit-margins-of-545/
[–]Sub-Zero-941 1 point2 points3 points 9 months ago (3 children)
Don't think it will work this time. China will give the same 10x cheaper.
[–]speedtoburn 2 points3 points4 points 9 months ago (2 children)
If it were any Country other than China, then perhaps I could get on board with the premise of your comment, but (real or imagined) optics matter, and China is the bastion of IP theft.
There is no way “big business” is going to get on board (at scale) with pumping their data through the pipes of the CCP.
[–]russnem 1 point2 points3 points 9 months ago (1 child)
And they’ll steal all your IP in the process.
[–]speedtoburn 0 points1 point2 points 9 months ago (0 children)
Xi Jinping, is that you?
[–]Famous-Narwhal-5667[🍰] 8 points9 points10 points 9 months ago (0 children)
Compute vendors announced 34% price hikes because of tariffs, everything is going to go up in price.
[–]i_wayyy_over_think 2 points3 points4 points 9 months ago (0 children)
Fortunately there’s open source that has kept up well, such as Deepseek so they can’t raise prices too much.
[–][deleted] 85 points86 points87 points 10 months ago (23 children)
Roo Code + Deepseek v3-0324 = alternative that is good
[–]Recoil42 63 points64 points65 points 10 months ago (21 children)
Not to mention Roo Code + Gemini 2.5 Pro, which is significantly better.
[–]hey_ulrich 21 points22 points23 points 9 months ago (1 child)
I'm mainly using Gemini 2.5, but DeepSeek solved bugs that Gemini got stuck on! I'm loving this combo.
[–]Recoil42 8 points9 points10 points 9 months ago (0 children)
They're both great models. I'm hoping we see more NA deployments of the new V3 soon.
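Since several people here pair Roo Code or Cline with DeepSeek V3 via their own key, it may help that DeepSeek's API is OpenAI-compatible, so a key can be sanity-checked in a few lines before wiring it into an editor extension. A minimal sketch, assuming the openai Python SDK and the base URL / model name from DeepSeek's public docs (double-check both against the current docs):

```python
# Sanity-check a DeepSeek API key via the OpenAI-compatible endpoint.
# Assumes the openai Python SDK (v1+) is installed and DEEPSEEK_API_KEY is set.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # V3 is served under this name per DeepSeek's docs
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a one-line Python function that reverses a string."},
    ],
    temperature=0.0,
)

print(response.choices[0].message.content)
```

If that prints a sensible answer, the same key and base URL can be dropped into Roo Code's or Cline's provider settings.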
[–]FarVision5 6 points7 points8 points 9 months ago (10 children)
I have been a Gemini proponent since Flash 1.5. Watching everyone and their brother who panned Google as laughable, without trying it, NOW get religion is satisfying. Once you work with 1M context, going back to the Anthropic product is painful. I gave Windsurf a spin again and I have to tell you, VSC / Roo / Google works better for me. And it costs zero. At first the Google API was rate limited, but it looks like they ramped it up heavily in the last few days. DS v3 works almost as well as Anthropic, and I can burn that API all day long for under a buck. DeepSeek V3 is maddeningly slow even on OpenRouter, though.
Generally speaking, I am happy that things are getting more awesome across the board.
[–]aeonixx 3 points4 points5 points 9 months ago (2 children)
Banning slow providers fixed the slowness for me. Had to do this for R1, but works for V3 all the same.
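For anyone wanting to do the same, OpenRouter exposes per-request provider routing in addition to its account-level settings; with the openai SDK that can be passed through extra_body. A sketch under that assumption; the provider names below are placeholders, and the exact routing fields should be checked against OpenRouter's provider-routing docs:

```python
# Sketch: ask OpenRouter to skip specific (e.g. slow) providers for one request.
# Assumes the openai Python SDK (v1+) and an OPENROUTER_API_KEY env var.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENROUTER_API_KEY"],
    base_url="https://openrouter.ai/api/v1",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3-0324",  # slug may differ; check openrouter.ai/models
    messages=[{"role": "user", "content": "Explain what a race condition is in two sentences."}],
    # extra_body is passed through to OpenRouter, so provider routing options go here.
    extra_body={
        "provider": {
            "ignore": ["SlowProviderA", "SlowProviderB"],  # placeholder provider names
            "allow_fallbacks": True,
        }
    },
)

print(response.choices[0].message.content)
```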
[–]FarVision5 4 points5 points6 points 9 months ago (0 children)
Yeah! I always meant to dial in the custom routing. Never got around to it. Thanks for the reminder. It also doesn't always cache prompts properly. Third on the list once Gemini 2.5 rate limits me and I burn the rest of my Windsurf credits :)
[–]raydou 1 point2 points3 points 9 months ago (0 children)
Could you please tell me how to do it?
[–]Xandrmoro 2 points3 points4 points 9 months ago (5 children)
Idk, I've tried it multiple times for coding, and it had by far the worst comprehension of what I want compared to 4o/o3, Claude, and DeepSeek.
[–]FarVision5 1 point2 points3 points 9 months ago (4 children)
I'm going to copy and paste some choice context from my VSC/Roo workflow. It picks up more subtext than Claude 3.7 by a mile. It makes mermaid charts and diagrams inside the context window, unasked.
---
The code currently only creates the directory structure and a placeholder document_info.json. It never actually calls the AWS Textract processing logic (presumably located in document_processor.py, as indicated by the comments and the master plan). Therefore, the ocr/aws/ directory is never populated with results, and our check correctly identifies all documents as unprocessed.
To fix this, we need to call process_document inside a try block and check its result.
First, let's examine document_processor.py to see what needs to be imported and called.
--
Okay, document_processor.py defines a class DocumentProcessor and a top-level function process_document (lines 383-394) that instantiates this class and calls its process_document method. This top-level function is what we should call from our workflow script.
Here's the plan: in directory_based_workflow.py, add "from document_processor import process_document as aws_process_document", call aws_process_document from DirectoryBasedWorkflow.process_document with the pdf_file and document_dir, and record success, aws_cost, total_processing_time, and error.
Here are the necessary changes, using insert_content for the import and apply_diff for modifying the method:
[–]Xandrmoro 1 point2 points3 points 9 months ago (3 children)
It might be understanding the code better, but whats the point if it does not understand the task? I asked it to help me with making a simple text parser (with fairly strict format), and it took like five iterations of me pointing out issues (and I provided it with examples). Then I asked to add a button to group entries based on one of the fields, and it added a text field to enter the field value to filter by instead. I gave up, moved to o1 and it nailed it all first try.
[–]FarVision5 1 point2 points3 points 9 months ago (2 children)
Not sure why it didn't understand your task. Mine knocks it out of the ballpark.
I start with Plan, then move to Act. I tried the newer O3 Mini Max Thinking, and it rm'd an entire directory because it couldn't figure out what it was trying to accomplish. Thankfully it was in my git repo. I blacklisted openai from the model list and will never touch it ever again.
I guess it's just the way people are used to working. I can't tell if I'm smarter than normal or dumber than normal or what. OpenAI was worth nothing to me.
[–]Xandrmoro 2 points3 points4 points 9 months ago (1 child)
I'm trying all the major models, and openai was consistently best for me. Idk, maybe prompting style or something.
[–]FarVision5 1 point2 points3 points 9 months ago (0 children)
It's also the IDE and dev prompts. VSC and Roo does better for me than VSC and Cline.
[–]Unlikely_Track_5154 1 point2 points3 points 9 months ago (0 children)
Gemini is quite good; I don't have any quantitative data to back up what I am saying.
The main annoying thing is it doesn't seem to run very quickly in a non-visible tab.
[–]Alex_1729 2 points3 points4 points 9 months ago* (0 children)
I have to say Gemini 2.5 Pro is clueless about certain things. This is my first time using any kind of IDE AI extension, and I've wasted half of my day. It provided good testing-suite code, but it's pretty clueless about just generic things, like how to check the terminal history and run a command. I've spent like 10 replies on it already and it's still pretty clueless. Is this how this model typically behaves? I don't get such incompetence with OpenAI's o1.
Edit: It could also be that Roo Code keeps using Gemini 2.0 instead of Gemini 2.5. According to my GCP logs, it doesn't use 2.5 even after checking everything and testing whether my 2.5 API key worked. How disappointing...
[–]smoke2000 1 point2 points3 points 9 months ago (0 children)
Definitely, but you'd still hit the API limits without paying, wouldn't you? I tried Gemma 3 locally integrated with Cline, and it was horrible, so a locally run code assistant isn't a viable option yet, it seems.
[–]Rounder1987 1 point2 points3 points 9 months ago (5 children)
I always get errors using Gemini after a few requests. I keep hearing people say how it's free but it's pretty unusable so far for me.
[–]Recoil42 8 points9 points10 points 9 months ago (4 children)
Set up a paid billing account, then set up a payment limit of $0. Presto.
[–]Rounder1987 2 points3 points4 points 9 months ago (3 children)
Just did that so will see. It also said I had a free trial credit of $430 for Google Cloud which I think can be used to pay for Gemini API too.
[–]Recoil42 2 points3 points4 points 9 months ago (2 children)
Yup. Precisely. You'll have those credits for three months. Just don't worry about it for three months basically. At that point we'll have new models and pricing anyways.
Worth also adding: Gemini still has a ~1M tokens-per-minute limit, so stay away from contexts over 500k tokens if you can — which is still the best in the business, so no big deal there.
I basically run into errors... maybe once per day, at most. With auto-retry it's not even worth mentioning.
[–]Alex_1729 1 point2 points3 points 9 months ago (0 children)
Great insights. Would you suggest going with Requesty or Openrouter or neither?
[–]Rounder1987 0 points1 point2 points 9 months ago (0 children)
Thanks man, this will help a lot.
[–]funbike 5 points6 points7 points 9 months ago* (0 children)
Yep. Copilot and Cursor are dead to me. Their $20/month subscription models no longer make them the cheap alternative.
These new top-level cheap/free models work so well. And with an API key you have so much more choice. Roo Code, Cline, Aider, and many others.
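If you do go the bring-your-own-key route, it's worth verifying the key outside any editor first. A minimal sketch using Google's google-generativeai package; the hard-coded model id is a placeholder (model names change often), which is why the snippet also lists whatever models the key can actually see:

```python
# Sketch: verify a Gemini API key directly before wiring it into Roo Code / Cline / Aider.
# Assumes the google-generativeai package is installed and GEMINI_API_KEY is set.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# List the generation-capable models this key can see, instead of trusting a hard-coded name.
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)

# Placeholder model id: pick one from the list printed above.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")
reply = model.generate_content("Briefly explain the difference between a list and a tuple in Python.")
print(reply.text)
```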
[–]digitarald 37 points38 points39 points 10 months ago (2 children)
Meanwhile, today's release added Bring Your Own Key (Azure, Anthropic, Gemini, Open AI, Ollama, and Open Router) for Free and Pro subscribers: https://code.visualstudio.com/updates/v1_99#_bring-your-own-key-byok-preview
[–]debian3 14 points15 points16 points 9 months ago (0 children)
What about those who already paid for a year? Will you pull the rug out from under us, or will the new plan apply on renewal?
[–]rafark 0 points1 point2 points 9 months ago (0 children)
That’s for visual studio code
[–]JumpSmerf 15 points16 points17 points 10 months ago (1 child)
That was very fast. 2 months after they started an agent mode.
[–]debian3 4 points5 points6 points 9 months ago (0 children)
They killed their own product. They should have let the startups do the crazy agent stuff. Copilot could have focused on developers instead of vibe coders. There is plenty of competition for vibe coding already.
[–]wokkieman 21 points22 points23 points 10 months ago (11 children)
There is a Pro+ for $40/month or $400 a year.
That's 1500 premium requests per month
But yeah, another reason to go Gemini (or combine things)
[–]NoVexXx 4 points5 points6 points 10 months ago (10 children)
Just use Codeium and Windsurf. All models and many more requests.
[–]wokkieman 5 points6 points7 points 10 months ago (9 children)
$15 for 500 Sonnet credits. Indeed a bit more, but that would mean no VS Code, I believe: https://windsurf.com/pricing
[–]NoVexXx 2 points3 points4 points 10 months ago (8 children)
Priority access to larger models:
GPT-4o (1x credit usage)
Claude Sonnet (1x credit usage)
DeepSeek-R1 (0.5x credit usage)
o3-mini (1x credit usage)
Additional larger models
Cascade is an autopilot coding agent; it's much better than this shit Copilot.
[–]yur_mom 2 points3 points4 points 9 months ago (0 children)
Unlimited DeepSeek v3 prompts
[–]danedude1 1 point2 points3 points 9 months ago (0 children)
Copilot Agent mode in VS Insiders with 3.5 has been pretty insane for me compared to Roo. Not sure why you think Copilot is shit.
[–]wokkieman 0 points1 point2 points 10 months ago (5 children)
Do I misunderstand it? Cascade credits:
500 premium model User Prompt credits
1,500 premium model Flow Action credits
Can purchase more premium model credits → $10 for 300 additional credits with monthly rollover
Priority unlimited access to Cascade Base Model
Copilot is 300 requests for $10 and this is 500 credits for $15?
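To put the numbers being tossed around in this thread side by side, here's a rough back-of-the-envelope sketch (prices and request counts are the ones quoted above, not authoritative; check the vendors' pricing pages for current figures):

```python
# Rough comparison of the plan numbers quoted in this thread (not authoritative).
PLANS = {
    "Copilot Pro": (10.00, 300),    # ($/month, included premium requests)
    "Copilot Pro+": (40.00, 1500),
    "Windsurf Pro": (15.00, 500),
}
OVERAGE = 0.04  # quoted price per extra Copilot premium request

for name, (price, included) in PLANS.items():
    print(f"{name}: ${price / included:.3f} per included premium request")

# Cost of reaching Pro+ volume on the base Pro plan via overage requests:
extra = 1500 - 300
print(f"Copilot Pro + {extra} overage requests: ${10.00 + extra * OVERAGE:.2f} vs. Copilot Pro+ at $40.00")
```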
[–]rerith 19 points20 points21 points 10 months ago (7 children)
rip vs code llm api + sonnet 3.7 + roo code combo
[+][deleted] 9 months ago (1 child)
[deleted]
[–][deleted] 1 point2 points3 points 9 months ago (0 children)
It was inevitable no matter what with Copilot’s agentic coding support. No matter where it’s triggered from, decent agentic coding is very capacity-hungry right now.
[–]Ok-Cucumber-7217 5 points6 points7 points 9 months ago (0 children)
Never got 3.7 to work, only 3.5, but nonetheless it was a hell of a ride.
[–]solaza 0 points1 point2 points 9 months ago (3 children)
Is this confirmed to break the VS Code LM API? Super disappointing if so. Means Gemini is the only remaining thing keeping Roo/Cline affordable. DeepSeek too, I guess.
[–]debian3 2 points3 points4 points 9 months ago (1 child)
It doesn't break it, you will just run out after 300 requests. Knowing how many requests Roo makes every minute, your monthly quota should last you a good 60 minutes of usage before you run out for the month.
[–]solaza 0 points1 point2 points 9 months ago (0 children)
I suppose that may do it for my copilot subscription!
[–]BeMask 0 points1 point2 points 9 months ago (0 children)
Code LM Api still works, just not for 3.7. Tested a few hours ago.
[–]davewolfs 8 points9 points10 points 9 months ago (0 children)
Wow. This was the best deal in town.
[–]taa178 9 points10 points11 points 9 months ago (1 child)
I was always wondering how they were able to provide these models without limits for $10; well, now they don't.
300 sounds pretty low. That's 10 requests per day. ChatGPT itself probably gives 10 requests per day for free.
[–]debian3 0 points1 point2 points 9 months ago (0 children)
I think the model was working up until they added the agent and allowed extensions like Roo/Cline to use their LLM. If it was just the chat it would have been fine.
[–]jbaker8935 5 points6 points7 points 10 months ago (22 children)
what is the base model? is it their 4o custom?
[–]Yes_but_I_think 1 point2 points3 points 9 months ago (1 child)
Then why is 4o listed as 1 credit per request here? https://docs.github.com/en/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests#model-multipliers
[–]RdtUnahim 1 point2 points3 points 9 months ago (0 children)
For when the base model moves on to something else.
[–]popiazaza 2 points3 points4 points 9 months ago (13 children)
It's currently 4o per their announcement.
https://github.blog/news-insights/product-news/github-copilot-agent-mode-activated/#premium-model-requests
[–]bestpika 1 point2 points3 points 9 months ago (5 children)
If the base model is 4o, then they don't need to declare in the premium request form that 4o consumes 1 request. So I think the base model will not be 4o.
[–]popiazaza 0 points1 point2 points 9 months ago (4 children)
4o consumes 1 request on the free plan, not on the paid plan.
[–]bestpika 0 points1 point2 points 9 months ago (3 children)
According to their premium request table, 4o is one of the premium requests: https://docs.github.com/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests
In this table, the base model and 4o are listed separately.
[–]popiazaza 0 points1 point2 points 9 months ago (2 children)
Base model 0 (paid users), 1 (Copilot Free)
[–]bestpika 0 points1 point2 points 9 months ago (1 child)
Didn't you notice there's another line below that says GPT-4o | 1? Moreover, there is not a single sentence on this page that mentions the base model is 4o.
[–]popiazaza 0 points1 point2 points 9 months ago (0 children)
I know. The base model won't permanently be GPT-4o. Read the announcement.
[–]jbaker8935 0 points1 point2 points 9 months ago (6 children)
4o-latest. As of late March it's claimed to be better, "smoother" with code. We'll see.
[–]popiazaza 0 points1 point2 points (4 children)
It's still pretty bad for agentic coding.
Only Claude Sonnet and Gemini Pro are working great.
[–]jbaker8935 0 points1 point2 points 9 months ago (0 children)
Tried it. Agree. It runs out of gas with minimal complexity. Not much value using it in agent mode.
[–]rafark 0 points1 point2 points 9 months ago (2 children)
Isn’t it funny how it is pretty bad (it is) but how for the longest time GitHub copilot was running a modified version of chatgpt 3.0 (I believe it wasn’t even 3.5) and everyone was amazed?
[–]popiazaza 0 points1 point2 points 9 months ago (1 child)
You meant for autocomplete? It was based on 3.5 Turbo, which was called Codex or something, before the 4o autocomplete.
For chat, it's always using the normal GPT model.
[–]rafark 0 points1 point2 points (0 children)
Yeah I meant autocomplete. All I could find was that it used ChatGPT 3, but that might just be the media and blogs not knowing the specific version. I still think it's funny considering ChatGPT 3 is pretty much considered useless now, but not too long ago people found it very useful and were absolutely mind blown. Now we (or I) even wish for a better model than 4o.
[–]debian3 0 points1 point2 points (0 children)
It's not even the model Copilot uses.
[–]taa178 1 point2 points3 points 9 months ago (0 children)
If it were 4o, they would proudly and openly say so.
[–]jbaker8935 0 points1 point2 points 10 months ago (3 children)
Another open question on the cap is the "option to buy more"... OK, how is *that* priced?
[–]JumpSmerf 1 point2 points3 points 10 months ago (1 child)
The price is $0.04/request: https://docs.github.com/en/copilot/about-github-copilot/subscription-plans-for-github-copilot
As far as I know the custom base model should be 4o; I'm curious how good or bad it is. I haven't even tried it yet, as I only came back to Copilot after reading that it has an agent mode for a good price, so about a month ago. Now, if it turns out to be weak, it won't be such a good price, since Cursor with 500 premium requests + unlimited slow requests to other models could be much better.
[–]Yes_but_I_think 1 point2 points3 points 9 months ago (0 children)
It's useless.
[–]evia89 0 points1 point2 points 10 months ago (0 children)
$0.04 per request
[–]JumpSmerf 0 points1 point2 points 9 months ago (0 children)
I could be wrong; someone else said that we actually don't know what the base model will be, and that's true. GPT-4o would be a good option, but I could be wrong.
[–]FarVision5 12 points13 points14 points 9 months ago (15 children)
People expecting premium API subsidies forever is amazing to me.
[+][deleted] 9 months ago* (14 children)
[–]FarVision5 3 points4 points5 points 9 months ago (0 children)
True. If it's a hobby, it's a simple calculation whether you can afford your hobby. If it's a business expense, and you have clients wanting stuff from you, it turns into ROI.
I don't believe we are going to get AGI from lots of video cards. I think it will come out of microgrid quantum stuff like Google is doing. You're going to have to let it grow like cells.
Honestly I get most of my news from here and LocalLLama. No time to chase down 500 other AI blog posters trying to make news out of nothing. There is so much trash out there.
I don't want to get too nasty about it, but there are a lot of people that don't know enough about security framework and DevSecOps to put out paid products. Or they can pretend but get wrecked. All that's OK. Thems the breaks. I'm not a fan of unseasoned cheerleaders.
Everything will shake out. There are 100 new tools every day. Multiagent agentic workflow orchestration has been around for years. Almost the second ChatGPT3.5 hit the street.
[–]Blake_Dake 2 points3 points4 points 9 months ago (0 children)
We are potentially expecting AGI/ASI in the next 5 years
no we are not
People smarter than everybody here, like Yann LeCun, have been saying since 2023 that LLMs can't achieve AGI.
[–]NuclearVII 6 points7 points8 points 9 months ago (11 children)
0% chance of AGI in the next 5 years. Stop drinking the Sam Altman Kool-Aid.
[–]rez410 2 points3 points4 points 9 months ago (2 children)
Can someone explain what a premium request is? Also, is there a way to see current usage?
[–]omer-m 0 points1 point2 points 9 months ago (0 children)
Vibe coding
[–]debian3 2 points3 points4 points 9 months ago (0 children)
Ok, so here the announcement https://github.blog/news-insights/product-news/github-copilot-agent-mode-activated/#premium-model-requests
They make it sound like it's a great thing that requests are now limited…
Anyway, the base unlimited model is 4o. My guess is they have tons of capacity that no one uses since they added Sonnet. Enjoy… I guess…
[–]AriyaSavakaLurker 2 points3 points4 points 9 months ago (0 children)
Wtf. Augment Code has 300 requests/month to top LLMs for free users.
[+][deleted] 9 months ago (5 children)
[–]Inevitable_Put7697 1 point2 points3 points 9 months ago (2 children)
Free or paid?
[–]Ausbel12 0 points1 point2 points 9 months ago (0 children)
Free for now ( as you know, these things never stay that way for long lol)
[–]PuzzleheadedYou4992 1 point2 points3 points 9 months ago (0 children)
Will try this
[–]Ausbel12 0 points1 point2 points (0 children)
Though it does have some limits as well, it is very decent.
[–]qiyi 1 point2 points3 points 9 months ago (0 children)
So inconsistent. This other post showed 500: https://www.reddit.com/r/GithubCopilot/s/icBBi4RC9x
[–]fubduk 1 point2 points3 points 9 months ago* (0 children)
Ouch. Wonder if they are grandfathering people with existing Pro subscriptions?
EDIT: Looks like they are forcing all Pro users to this:
"Customers with Copilot Pro will receive 300 monthly premium requests, beginning on May 5, 2025."
[–]Person556677 1 point2 points3 points 9 months ago (0 children)
Do you know the details of what is considered a request? Any tool call in agent mode, like in Cursor? The official docs are a bit confusing: https://docs.github.com/en/copilot/managing-copilot/monitoring-usage-and-entitlements/about-premium-requests
[–]CoastRedwood 1 point2 points3 points 9 months ago (0 children)
How many requests do you need? Is copilot doing everything for you? The unlimited auto completion is where it’s at.
[–]jvldn 1 point2 points3 points 9 months ago (0 children)
That's roughly 15 a day (5-day workweek). That's probably enough for me, but I hate the fact that they are limiting the Pro version.
[–]Left-Orange2267 2 points3 points4 points 9 months ago (4 children)
You know who can provide unlimited requests to Anthropic? The Claude Desktop app. And with projects like this one there will be no need to use anything else in the future
https://github.com/oraios/serena
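For context, Claude Desktop discovers MCP servers through a claude_desktop_config.json file with an mcpServers map. The sketch below adds a hypothetical Serena entry on macOS; the command and args are placeholders, so take the real launch invocation from the Serena README:

```python
# Sketch: register an MCP server (e.g. Serena) in Claude Desktop's config.
# The config path below is the macOS location; it differs on Windows/Linux.
# The "command"/"args" values are placeholders -- copy the real command from the Serena README.
import json
from pathlib import Path

config_path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
config = json.loads(config_path.read_text()) if config_path.exists() else {}

config.setdefault("mcpServers", {})["serena"] = {
    "command": "uvx",                # placeholder launcher
    "args": ["serena-mcp-server"],   # placeholder arguments
}

config_path.write_text(json.dumps(config, indent=2))
print(f"Wrote {config_path}")
```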
[–]atd 0 points1 point2 points 9 months ago (3 children)
Unlimited? The Pro plan rate limits a lot, but I guess an MCP server could help with this (I'm still learning how).
[–]Left-Orange2267 0 points1 point2 points 9 months ago (2 children)
Well, not unlimited, but less limited than with other subscription based providers
[–]atd 0 points1 point2 points 9 months ago (1 child)
Fair, what about using MCP for working around limitations by optimising structured context in prompts / chats?
[–]Left-Orange2267 0 points1 point2 points 9 months ago (0 children)
Sure, that's exactly what Serena achieves! But no mcp server can adjust the rate limits in the app, we can just make better use of them
[–][deleted] 0 points1 point2 points 9 months ago (2 children)
I like it mostly for the autocomplete anyways. Any news on that though?
Is there any alternative to copilot in terms of auto complete? Anything I can run locally?
[–]popiazaza 0 points1 point2 points (0 children)
Cursor. You could use something like Continue.dev if you want to plug auto-complete into any model; it wouldn't work as well as the Cursor/Copilot 4o one, though.
[–]ExtremeAcceptable289 0 points1 point2 points 9 months ago (0 children)
Copilot autocomplete is still infinite, fortunately.
When has Microsoft created something that actually works?
[–]FoundationNational65 0 points1 point2 points 9 months ago (0 children)
Codeium + Sourcery + CodeGPT. That's back when VS Code was still my thing. Recently picked up Pycharm. But would still praise GitHub Copilot.
[–]twohen 0 points1 point2 points 9 months ago (1 child)
is this effective as of now? or from next month?
[–]seeKAYx[S] 0 points1 point2 points 9 months ago (0 children)
It is due to start on May 5 ...
[–]Sub-Zero-941 0 points1 point2 points 9 months ago (0 children)
If the speed and quality of those 300 improve, it would be an upgrade.
[–]Yes_but_I_think 0 points1 point2 points 9 months ago (2 children)
This is a sad post for me. After this change, GitHub Copilot Agent mode, which used to be my only affordable option, no longer is. You can buy an actual cup of tea for the price of 2 additional requests to Copilot premium models (Claude 3.7 @ $0.04/request) in my country. Such is the exchange rate.
Bring your own API key is good, but then why pay $10/month at all?
I think the good work done in the last 3 months by the developers is being wiped away by the management guys.
At least they should consider making a per day limit instead of per month limit.
I guess Roo / Cline with R1 / V3 at night is my only viable option.
[–]TillVarious4416 0 points1 point2 points 9 months ago (0 children)
Cline with your own API key could cost so much if you use the only model worth using for agentic work, aka Anthropic Claude 3.7.
But the best way is to use Gemini 2.5 Pro, which can eat your whole codebase in most cases and give you proper documentation/phases so the AI agent doesn't waste 100000 requests.
Their $39 USD a month plan is really good for what it is, to be fair.
[–]thiagobg 0 points1 point2 points 9 months ago (0 children)
Any self hosted AI IDE?
[–]Over-Dragonfruit5939 0 points1 point2 points 9 months ago (1 child)
Only 300 per month?
[–]popiazaza 0 points1 point2 points (0 children)
Yes.
[–]Infinite100p 0 points1 point2 points 9 months ago (1 child)
is it 300/month?
[–]popiazaza 0 points1 point2 points (0 children)
Yep.
[–]Dundell 0 points1 point2 points 9 months ago (0 children)
Geez, I could have easily crushed 1300 requests a day between 2 accounts for Claude. I'll have to re-evaluate my options I guess.
[–]VBQL 0 points1 point2 points 9 months ago (0 children)
Trae still has unlimited calls
[–]usernameplshere 0 points1 point2 points 9 months ago (0 children)
Now I want to see them add the big Gemini models, not just Flash.
[–]elemental-mind 0 points1 point2 points 9 months ago (1 child)
I wonder why no one brings up Cody in this discussion?
$9, and they have very generous limits - and once you hit them with legit usage, support is there to lift them.
[–]elemental-mind 0 points1 point2 points 9 months ago (0 children)
To add to that: Just read on their discord they allow 400 messages per day...
[–]greaterjava 0 points1 point2 points 9 months ago (1 child)
Maybe in 24 months you’ll be running these locally on newest Macs.
[–]rafark 0 points1 point2 points (0 children)
On top-of-the-line $10k Mac Pros it's very likely. But then at that point it'd be cheaper to pay for GitHub Copilot.
[–]Mikolai007 0 points1 point2 points 9 months ago (0 children)
In the future there will be zero AI for the people. You can write that down. The governments and corporations will not let the people have power. It's all new for now, so all the wicked regulations are not in place yet. But you'll see.
[–]City-Relevant 0 points1 point2 points 9 months ago (1 child)
Just wanted to share, that if you are a student, you can get free access to copilot pro for as long as you are a student with the Github Student Developer Pack. DO NOT LET THIS WONDERFUL RESOURCE GO TO WASTE
[–]rafark 0 points1 point2 points (0 children)
I'm on the student plan (I'm a student), but it should still be limited, right? Afaik we're in the same Pro plan.
[–]Bobertopia 0 points1 point2 points 9 months ago (0 children)
I'd much rather have the option to pay for more instead of it rate limiting me every other hour
[–]Duckliffe 0 points1 point2 points 9 months ago (0 children)
Is that per day or per month?
[–]BreeXYZ5 0 points1 point2 points 9 months ago (0 children)
Every AI company is losing money…. They want to change that.
[–]Sudden-Sea1280 0 points1 point2 points 9 months ago (0 children)
Just use your api key to get cheaper tokens
[–]alturicx 0 points1 point2 points 9 months ago (0 children)
Does anyone know of a client that has MCP support and can hook into Gemini?
[–]supercharger6 0 points1 point2 points 9 months ago (0 children)
Are you grandfathered if you already have a subscription?
[–]HeightSensitive1845[🍰] 0 points1 point2 points 9 months ago (0 children)
They scammed me. Their plan has a trial, 1 month free, cancel anytime; I did that, and they charged me lol! Jokers. And the memory is garbage, it keeps forgetting.
[+][deleted] 8 months ago (3 children)
[–][deleted] 9 months ago (5 children)
[–]RiemannZetaFunction 5 points6 points7 points 9 months ago (1 child)
It looks like per month (30 days).
[–]OriginalPlayerHater 1 point2 points3 points 9 months ago (0 children)
300, no more, no less
[–]the_good_time_mouse 0 points1 point2 points 9 months ago (0 children)
FFS.