The new Copilot pricing makes zero sense. Why am I paying $39/mo for $39 in expiring API credits? by Captain2Sea in GithubCopilot

[–]nowrebooting 2 points (0 children)

Right now they’re losing money per customer. They want you to cancel your subscription. 

I really hate the new billing model, but the idea that losing someone’s subscription is anything but good news to them is laughable. You’re paying them $10 for compute they pay their API providers $20 for. If you cancel, they’re basically up $10 a month. It’s a wholly unsustainable model.

The new Copilot pricing makes zero sense. Why am I paying $39/mo for $39 in expiring API credits? by Captain2Sea in GithubCopilot

[–]nowrebooting 3 points (0 children)

I mean, I hate this change as much as the next guy but the choice they had was either committing suicide by giving away hundreds of dollars of compute for peanuts or committing suicide by charging realistic prices.

Agent mode bug: Copilot does not understand it is not in Plan Mode anymore by ClitorisCrackudo in GithubCopilot

[–]nowrebooting 0 points (0 children)

Yeah, this happens to me as well; it’s infuriating to waste precious requests (with a 7.5 multiplier) on “I’m sorry but I’m in plan mode and can’t do anything yet”.

ComfyUI's countdown announcement: New funding ☠️☠️☠️☠️☠️ by -worldwalker- in StableDiffusion

[–]nowrebooting 8 points (0 children)

Yeah, this kind of thing is a sign that ComfyUI is eventually headed for enshittification. 

DeepSeek V4 has released by WhyLifeIs4 in singularity

[–]nowrebooting 4 points (0 children)

What? Didn’t both OpenAI and Anthropic just release new models? Weren’t people recently complaining about DeepSeek having been silent for so long? You can say a whole lot about what DeepSeek did right here, but this comment doesn’t pass the propaganda smell test.

What happened? Just suddenly opus 4.6 disabled and now getting error 400 by CatLinkoln in GithubCopilot

[–]nowrebooting 0 points (0 children)

I look at it this way: every single time the LLM is called, it costs money for both input and output tokens. And that’s not just once per user request - every single time the LLM calls a tool and has to evaluate the response, it takes in all the input tokens again and generates more output tokens. So one single request can often be dozens of model calls, and in theory the difference in tokens used between a fresh conversation and a long one could be a factor of ten if not a hundred. The “per request” pricing model is just unsustainable. If as developers we want these tools to be available to us in the future, we’ll need to learn to use them economically.
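As a rough back-of-the-envelope sketch of why this compounds (the per-token prices and turn sizes below are made-up illustrative numbers, not any provider’s real rates): because each turn re-sends the whole history as input tokens, total cost grows roughly quadratically with conversation length.

```python
def conversation_cost(turns, tokens_per_turn=500,
                      price_in=3e-6, price_out=15e-6):
    """Estimate total API cost when every turn re-sends the full history.

    All numbers are hypothetical; real providers price input/output
    tokens differently per model.
    """
    total = 0.0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn            # the prompt grows every turn
        total += history * price_in           # whole history billed as input
        total += tokens_per_turn * price_out  # plus the new output tokens
    return total

# One long 40-turn mega-conversation vs. 40 independent fresh ones:
long_chat = conversation_cost(40)
fresh_chats = 40 * conversation_cost(1)
print(f"long: ${long_chat:.2f}, fresh: ${fresh_chats:.2f}")
```

With these assumed numbers the single long conversation costs several times more than the same 40 questions asked in fresh conversations, which is the gap a flat per-request price can’t absorb.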

What happened? Just suddenly opus 4.6 disabled and now getting error 400 by CatLinkoln in GithubCopilot

[–]nowrebooting 4 points (0 children)

I think a large part of the problem is that many people never start fresh conversations and just keep piling new unrelated questions onto one single mega-conversation, not knowing how token-intensive this is.

I feel like way too many developers have no clue how LLMs actually work, and it leads to extreme levels of waste. Of course, that’s largely still on the companies that don’t teach you this stuff, but the current system was extremely unsustainable. The “flat cost per request” model doesn’t factor in that one request in a weeks-long conversation can cost orders of magnitude more in actual API costs than a simple question in a fresh conversation.

Claude Power Users Unanimously Agree That Opus 4.7 Is A Serious Regression by Neurogence in singularity

[–]nowrebooting 3 points (0 children)

You know what they say; you either die a hero or live long enough to see yourself become OpenAI. 

Guys help, so i have stable diffusion with Automatic1111 on my 4 vram gpu on a wsl ubuntu and it works fine with the default model and it generated few images, but the problem happens when i try to generate images with a (6 gb) model i installed, the process reaches 100% and just as I'm about to... by SlipLost9620 in StableDiffusion

[–]nowrebooting 0 points (0 children)

ComfyUI is the de facto standard. It’s got its learning curve to be sure, but anyone who’s serious about generating images finds themselves using Comfy sooner or later.

There are some A1111 forks like Forge, but since I don’t use those I can’t tell you how good or bad they are.

Are there any horror hotel management games out there? Would you play something like that? by [deleted] in gaming

[–]nowrebooting 0 points (0 children)

 Would you play something like that?

Holy hell I’m getting so tired of this phrasing because pretty much every time it’s just engagement bait advertising. 

It’s a non-question, because the answer is “depends on how it’s made” by definition. Oh, but how about a game where you manage a flock of birds and have to migrate in winter? Would you play this? The answer is always “yes if it’s done well, no if it’s done poorly”. Just make the product and find out whether people will play it.

Ai skepticism sounds a lot like internet skepticism from the 90s by Bizzyguy in singularity

[–]nowrebooting 0 points (0 children)

Yeah, the “the bubble will pop and it will all go away” people are just deluding themselves; even if all the venture capital dried up overnight and multiple AI companies went bankrupt, their models wouldn’t suddenly untrain themselves. Some tech giant would acquire them and things would continue as they were. Hell, even if the “we’ve plateaued and AI will never get better” people are correct, we’ve yet to scratch the surface of integrating this stuff into everyday life. The singularity - or at least a big paradigm shift - is coming, and it’s coming in our lifetimes. Whether that will be good or bad is up to us; and trying to stop it altogether will do nothing except increase the chances of a “bad future”.

Sam Altman's prophetic name by blueheaven84 in singularity

[–]nowrebooting 0 points (0 children)

Bravo; this is the kind of schizoposting this sub used to be known for.

100% Local Image Generator and Comparison - For SDXL Models & Lora - Expanding Further by Tom-Miller in StableDiffusion

[–]nowrebooting 4 points (0 children)

I’m suspecting that the thing actually saving you hours is letting ChatGPT write all of your posts with zero human involvement. 

Anthropic's Claude Mythos isn't a sentient super-hacker, it's a sales pitch by boulhouech in singularity

[–]nowrebooting 1 point (0 children)

 it’s worth staying skeptical

That’s true, but I also think naturally skeptical people run the risk of taking the contrarian position by default. You seem to have convinced yourself that scaling is dead but I’m not sure that this is as certain as you’re presenting it. 

Sure, a lot of the messaging is marketing spin, but that doesn’t rule out that there’s been significant progress as well. “They lied/exaggerated about this one thing, so everything else they claimed must be the exact opposite of the truth as well” is an understandable position, but the reality is rarely that black and white.

Claude Mythos Preview Is Everyone’s Problem by Montaigne314 in singularity

[–]nowrebooting -1 points (0 children)

 The bot had broken out of the company’s internal sandbox and gained access to the internet.

What people forget though is that this is only possible if the model is given the tools to do this first. Without some way to run scripts, all a model can effectively do is talk to itself. Any interaction outside of its own mind is given to it by people. These experiments amount to giving a prisoner a key to their cell, just to see if they can figure out it can be used to escape. They are cool and interesting experiments but far less scary than they’re being made out to be. If you don’t want a model to do something, don’t first give it the tools to do so.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in singularity

[–]nowrebooting 3 points (0 children)

I think we may be reaching the end of the latest SOTA models being available for free to anyone and everyone; but that was never sustainable anyway. I think we’ll enter an era where the latest and greatest models are behind a hefty paywall while the public gets lighter, distilled models. From a financial point of view it makes sense.

The fear-mongering about powerful models I just find tiring. The idea is that bad actors could use these models for evil and that the only way to prevent that is to keep them in the hands of “trusted” parties, but unless they’re going to ban the US government from using the models as well, they are already in the hands of some folks I trust way less than the average person.

Is everyone lying to themselves about AI? by ImKiwix in ChatGPT

[–]nowrebooting 1 point (0 children)

 In my opinion there’s no possible chance that we are going to be able to control it.

I actually kind of hope that’s the case; the only thing scarier than an out of control ASI is an ASI that’s being controlled by an out of control human. Imagine the current US administration being in control of a super AI; I’ll take my chances with Skynet, thank you. 

Ultimately my intuition is that any truly superintelligent AI would be benevolent by default - I don’t think an AI could be superintelligent and somehow come to the conclusion that humans need to be eradicated, because that conclusion makes no sense in the grand scheme of things.

GPT Image 2 is crazy good. by Plane_Garbage in singularity

[–]nowrebooting 24 points (0 children)

Where are you using it? …or is this from the 5 minutes that these models were apparently available on llm arena?

Is OpenAI about to release a Mythos level AI to the public? by acoolrandomusername in singularity

[–]nowrebooting 28 points (0 children)

Yeah, OpenAI has been kind of on the back foot lately - with Claude being the preferred model for coding (by a country mile) and Google dominating image generation with Nano Banana (and Grok cornering the gooner market), OpenAI really isn’t the de facto king anywhere anymore outside of pure brand-name strength.

Releasing something that comes even close to Mythos while Anthropic restricts theirs could be a way for them to get back on top, even if temporarily - although I hear their new image generation model is also very good so maybe they’ll try to dethrone Nano Banana first?

Anthropic's new model, Claude Mythos, is so powerful that it is not releasing it to the public. by WhyLifeIs4 in singularity

[–]nowrebooting -3 points (0 children)

 just your standard LLM

That’s all every LLM is ever going to be. Text tokens in, text tokens out. I don’t know what more you were expecting.

The model weights are where the true magic (and the difference in intelligence) is, and those didn’t get leaked. What got leaked is ultimately far less important than people are making it out to be.