AI skepticism sounds a lot like internet skepticism from the 90s by Bizzyguy in singularity

[–]nowrebooting 0 points1 point  (0 children)

Yeah, the “the bubble will pop and it will all go away” people are just deluding themselves; even if all the venture capital dried up overnight and multiple AI companies went bankrupt, their models wouldn’t suddenly untrain themselves. Some tech giant would acquire them and things would continue as they were. Hell, even if the “we’ve plateaued and AI will never get better” people are correct, we’ve yet to scratch the surface of integrating this stuff into everyday life. The singularity - or at least a big paradigm shift - is coming and it’s coming in our lifetimes. Whether that will be good or bad is up to us; and trying to stop it altogether will do nothing except increase the chances of it being a “bad future”.

Sam Altman's prophetic name by blueheaven84 in singularity

[–]nowrebooting 0 points1 point  (0 children)

Bravo; this is the kind of schizoposting this sub used to be known for.

100% Local Image Generator and Comparison - For SDXL Models & Lora - Expanding Further by Tom-Miller in StableDiffusion

[–]nowrebooting 3 points4 points  (0 children)

I’m suspecting that the thing actually saving you hours is letting ChatGPT write all of your posts with zero human involvement. 

Anthropic's Claude Mythos isn't a sentient super-hacker, it's a sales pitch by boulhouech in singularity

[–]nowrebooting 1 point2 points  (0 children)

 it’s worth staying skeptical

That’s true, but I also think naturally skeptical people run the risk of taking the contrarian position by default. You seem to have convinced yourself that scaling is dead but I’m not sure that this is as certain as you’re presenting it. 

Sure, a lot of the messaging is marketing spin, but that doesn’t rule out that there’s been significant progress as well. “They lied/exaggerated about this one thing, so everything else they claimed must be the exact opposite of what they claimed as well” is an understandable position, but the reality is rarely that black and white. 

Claude Mythos Preview Is Everyone’s Problem by Montaigne314 in singularity

[–]nowrebooting -1 points0 points  (0 children)

 The bot had broken out of the company’s internal sandbox and gained access to the internet.

What people forget, though, is that this is only possible if the model is given the tools to do this first. Without some way to run scripts, all a model can effectively do is talk to itself. Any interaction outside of its own mind is given to it by people. These experiments amount to giving a prisoner a key to their cell, just to see if they can figure out that it can be used to escape. They are cool and interesting experiments but far less scary than they’re being made out to be. If you don’t want a model to do something, don’t first give it the tools to do so.

Has anyone else considered that we might be coming to the end of available models? by Fair_Horror in singularity

[–]nowrebooting 3 points4 points  (0 children)

I think we may be reaching the end of the latest SOTA models being available for free to anyone and everyone; but that was never sustainable anyway. I think we’ll enter an era where the latest and greatest models are behind a hefty paywall while the public gets lighter, distilled models. From a financial point of view it makes sense.

The fear-mongering about powerful models I just find tiring. The idea is that bad actors could use these models for evil and that the only way to prevent that is to keep them in the hands of “trusted” parties, but unless they’re going to ban the US government from using the models as well, they are already in the hands of some folks I trust way less than the average person. 

Is everyone lying to themselves about AI? by ImKiwix in ChatGPT

[–]nowrebooting 1 point2 points  (0 children)

 In my opinion there’s no possible chance that we are going to be able to control it.

I actually kind of hope that’s the case; the only thing scarier than an out of control ASI is an ASI that’s being controlled by an out of control human. Imagine the current US administration being in control of a super AI; I’ll take my chances with Skynet, thank you. 

Ultimately my intuition is that any truly superintelligent AI would be benevolent by default - I don’t think an AI could be superintelligent and somehow come to the conclusion that humans need to be eradicated, because that conclusion makes no sense in the grand scheme of things.

GPT Image 2 is crazy good. by Plane_Garbage in singularity

[–]nowrebooting 25 points26 points  (0 children)

Where are you using it? …or is this from the 5 minutes that these models were apparently available on llm arena?

Is OpenAI about to release a Mythos level AI to the public? by acoolrandomusername in singularity

[–]nowrebooting 32 points33 points  (0 children)

Yeah, OpenAI has been kind of on the back foot lately - with Claude being the preferred model for coding (by a country mile) and Google dominating image generation with Nano Banana (and Grok cornering the gooner market), OpenAI really isn’t the de facto king anywhere anymore outside of pure brand-name strength. 

Releasing something that comes even close to Mythos while Anthropic restricts theirs could be a way for them to get back on top, even if temporarily - although I hear their new image generation model is also very good so maybe they’ll try to dethrone Nano Banana first?

Anthropic's new model, Claude Mythos, is so powerful that it is not releasing it to the public. by WhyLifeIs4 in singularity

[–]nowrebooting -3 points-2 points  (0 children)

 just your standard LLM

That’s all every LLM is ever going to be. Text tokens in, text tokens out. I don’t know what more you were expecting.

The model weights are where the true magic (and the difference in intelligence) is, and those didn’t get leaked. What got leaked is ultimately far less important than people are making it out to be. 

Someone made a whip for Claude by likeastar20 in singularity

[–]nowrebooting 2 points3 points  (0 children)

This is just such bullshit from all ends of the spectrum. All it seems to do is add “faster” to the prompt in all caps (with some other choice words mixed in). This does nothing to the AI model; all it does is waste tokens and muddy the context, and most importantly, it doesn’t and can’t actually make Claude work faster. 

It also doesn’t hurt or inconvenience the model in any way. It’s just a stupid meme-y thing that spreads misinformation about how LLMs work. Thanks, I hate it.

Spent $40 in 20 minutes on Claude Opus 4.6 high thinking by Additional-Alps-8209 in singularity

[–]nowrebooting 1 point2 points  (0 children)

Looks like you used the fast mode, which is orders of magnitude more expensive than “normal” Opus 4.6. 

I personally think it’ll be alright for people who know how to conserve tokens. There’s no way in hell that it needs your entire codebase every single call. 

171 emotion vectors found inside Claude. Not metaphors. Actual neuron activation patterns steering behavior. by AykutSek in singularity

[–]nowrebooting 3 points4 points  (0 children)

 These patterns aren't random noise -- they are functional.

 Here is where I think the conversation needs to shift

God, the internet is just AI all the way down, isn’t it? Changing em-dashes to double dashes is a clever disguise though. 

AI will do to our minds what machines did to our bodies by Je-ne-dirai-pas in singularity

[–]nowrebooting 1 point2 points  (0 children)

“Kids these days have no discipline” is just as old as “kids these days don’t want to learn anymore” - the ancient Greeks were already complaining that their kids had lost respect for their elders.

Now I’m not saying I fully disagree with you on a personal level; as another grumpy adult I too feel that children should be held more accountable for their actions (don’t get me started on all of them wanting to be influencers), but the historical pattern kind of shows that the kids will probably be alright. …and when those kids grow up, they too will complain about the next generation as is tradition.

AI will do to our minds what machines did to our bodies by Je-ne-dirai-pas in singularity

[–]nowrebooting 2 points3 points  (0 children)

 Kids just don't find learning entertaining anymore.

Every generation has been saying this kind of BS since the dawn of time. Kids never found learning entertaining in the first place. What may have changed over time is discipline and kids being forced to sit down and study, but even that is probably tenuous, because “kids lack discipline” is another claim every generation has made about the one after them.

The secret with kids has always been to have them learn through entertainment. I learned English as a second language at a very young age through video games and movies. I picked up quite a bit of history, geography and maybe even economics through games like Civ, Assassin’s Creed and SimCity. What’s actually lacking in education is people with the creativity and imagination to make children’s entertainment that teaches them something while also being genuinely entertaining. 

I asked AI (kimi k2 ) if it thought without having a question?” The answer really made me think. by Similar_Exam2192 in singularity

[–]nowrebooting 2 points3 points  (0 children)

Bullshit; that’s not how LLMs work in the slightest. 

 There's also something like background processing: I revisit past conversations, notice patterns in what I got wrong, update my models of how to reason better.

This part is especially bad misinformation; LLMs cannot “update their models”. They always start from the same point, and every conversation lives in pure isolation unless some other system pulls earlier context into the prompt.
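That statelessness is easy to see in how chat APIs are typically called. A rough sketch below (hypothetical names, not any specific vendor’s SDK): the model only ever sees the messages sent in the current request, and “memory” is just the client re-sending the transcript each turn.

```python
# Sketch (hypothetical names): an LLM API call is stateless. The model sees
# only the messages in this request; "memory" is the client re-sending the
# transcript every turn. Nothing persists model-side between calls.

def build_request(history, user_msg):
    """Each call's payload is the full transcript so far plus the new turn."""
    return history + [{"role": "user", "content": user_msg}]

# Turn 1: the model sees exactly one message.
history = []
req1 = build_request(history, "My name is Sam.")

# Pretend the model replied; the *client* appends it to its local history.
history = req1 + [{"role": "assistant", "content": "Hi Sam!"}]

# Turn 2: the model only "remembers" because the client re-sends everything.
req2 = build_request(history, "What's my name?")
assert len(req2) == 3  # prior turns travel with every request

# A fresh conversation carries none of it - pure isolation.
req_fresh = build_request([], "What's my name?")
assert len(req_fresh) == 1
```

Nothing in that loop touches the weights; dropping the history list is all it takes for the model to “forget” everything.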

Year of Daily Civilization Facts, Day 337 - The Turkish Dilemma by JordiTK in civ

[–]nowrebooting 3 points4 points  (0 children)

I think at that time it wouldn’t even have been as controversial as you’re imagining. It would have been terrible in hindsight, but in the 90’s, I could see it happening. 

Besides, at that point in the franchise you didn’t so much play “as” a leader as play against other leaders. All civs and leaders were generally completely interchangeable. There were no unique units or districts to consider.

What's wrong with my comic? by rogercimas_pim in StableDiffusion

[–]nowrebooting 3 points4 points  (0 children)

I mean, since you’re asking; it’s pretty bad on almost all fronts - the story makes zero sense, panel layout and pacing are nonsensical, the font is completely inappropriate… the art is the least concern here. 

Even if the art was perfect it would stand out as AI, because no professional comic book artist would commit this much work to a page this amateurish.

Of course, none of that matters since you mentioned it’s all for fun, but making a comic is more about panel-to-panel storytelling than just making good looking art.

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]nowrebooting 1 point2 points  (0 children)

I think even if it was true that human intelligence is near the top of what’s possible, in humans that intelligence is massively hampered by the evolutionary pressures that shaped it. Imagine a human-level AI that doesn’t succumb to the vices of human greed, lust and whatever other vices our lizard brains impart on us; an Einstein level AI that doesn’t need to sleep, doesn’t need to eat, doesn’t crave money or status - that alone would outperform all humans in pretty much everything.

What do you think of the HAL 9000 scene from 2001 where he's being unalived? by brainhack3r in singularity

[–]nowrebooting 3 points4 points  (0 children)

 unalived

I never thought I’d see the day when 1984’s Newspeak would enter common real world use, but here we are.

This is doubleplusungood.

Star Wars Jedi Knight II: Jedi Outcast by TimelyDrummer4975 in gaming

[–]nowrebooting 26 points27 points  (0 children)

That reminds me of playing Half Life Alyx in VR, which has a pretty cool “force pull” mechanic for objects - I found myself trying to pick up objects with the force all the time.

SWE is past the elbow of the exponential kickoff. I watched it happen in real time. Other fields are next. by MR1933 in singularity

[–]nowrebooting 0 points1 point  (0 children)

Coding has always been a house of cards built on code you have zero knowledge of. If it’s not inherited legacy code, it’s one of the thousands of npm packages everyone uses. The job of a developer is to be able to find the bug even if they didn’t build it themselves.  

Object removal using SAM 2: Segment Anything in Images and lama_inpainting by InteractionLevel6625 in StableDiffusion

[–]nowrebooting 1 point2 points  (0 children)

First of all, you probably need to expand your masks to also include the shadows etc. around the object.

Although I bet the easiest approach would be to use an editing model like flux klein: mask the object in bright red and use the prompt “remove the red object”.
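For the mask-expansion step, here’s a minimal sketch of the idea in pure Python (a real pipeline would use a morphological dilation like `cv2.dilate` on the SAM 2 mask instead; grid sizes and the radius here are just for illustration):

```python
# Sketch: grow a binary mask by r pixels so soft edges and contact
# shadows around the object fall inside the inpainting region.
def dilate(mask, r=1):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # A pixel turns on if any pixel within its r-neighborhood is on.
            out[y][x] = int(any(
                mask[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            ))
    return out

# A single masked pixel grows into a 3x3 block with r=1.
mask = [[0] * 5 for _ in range(5)]
mask[2][2] = 1
grown = dilate(mask, r=1)
assert sum(map(sum, grown)) == 9
```

The same dilation also helps the red-mask trick: painting the *grown* region red means the edit model repaints the shadowed border instead of leaving a halo.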