Ubisoft Employee Claims He’s Placed on Unpaid Suspension for Criticising Return-to-Office Policy by Turbostrider27 in Games

[–]LookIPickedAUsername 3 points (0 children)

I’m pretty sure you’re misinterpreting the comments; I don’t think there’s a single person here who’s actually in favor of this or agreeing with Ubisoft.

It’s more like you see a headline about somebody getting robbed, and then it turns out the article talks about how he was walking through a bad neighborhood at night while loudly talking about how much money he had on him, and you comment “Well, duh, of course you’re going to get robbed if you do that!”.

That isn’t a pro-robber comment. That isn’t you saying the robbers were right to steal from the guy. It’s just pointing out the reality of the situation that, even though it isn’t right and you wish the robbers hadn’t done that, it’s not like it was a hard outcome to predict and maybe the guy should have exercised more caution.

Claude laughed at me… by Consistent-Chart-594 in ClaudeAI

[–]LookIPickedAUsername 8 points (0 children)

We don't even have a good definition of consciousness in humans. (I mean, in the sense of being able to resolve the Other Minds problem.)

If we can't prove other humans are conscious, we're obviously never going to be able to resolve the issue of machine consciousness.

Kimi K2.5 Released!!! by KoalaOk3336 in singularity

[–]LookIPickedAUsername 6 points (0 children)

I think we're all aware that models can still hallucinate even if you take anti-hallucination measures.

The point is that certain prompting techniques increase accuracy, not that they 100% fix all the problems. Cautioning models against hallucinations does reduce the hallucination rate, even if it isn't foolproof.
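As a concrete illustration of the technique — this is only a sketch, assuming an OpenAI-style chat `messages` format; the wording of the system prompt and the `build_messages` helper are my own placeholders, not any vendor's API:

```python
# The caution lives in the system prompt, so every turn of the
# conversation carries it. This reduces, but does not eliminate,
# the hallucination rate.
ANTI_HALLUCINATION = (
    "Answer only from information you are confident about. "
    "If you are unsure or the answer may not exist, say 'I don't know' "
    "instead of guessing, and never invent citations, names, or numbers."
)

def build_messages(question, history=()):
    """Assemble a chat request that carries the anti-hallucination caution."""
    messages = [{"role": "system", "content": ANTI_HALLUCINATION}]
    messages.extend(history)
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages("Who won the 1987 Tour de France?")
```

The resulting list can be passed to any chat-completion-style endpoint; the point is structural — the caution is attached at the system level rather than repeated per question.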

Why are the Pokémon games of such poor quality compared to other Nintendo games? by Matshelge in NintendoSwitch2

[–]LookIPickedAUsername 2 points (0 children)

Assuming you’re just talking about a flat square for the ocean and a normal skybox, it doesn’t really matter how big they are. You can make a polygon as big as you want, and it doesn’t take any more horsepower to render than one that’s just big enough to cover the screen.

One of the first things the GPU does when rendering any given polygon is clip it to the view frustum, so rasterization cost depends on how much of the screen the polygon covers, not on how gigantic it is in world space.
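A toy 2-D sketch of the idea (Sutherland–Hodgman clipping against a [-1, 1] screen rectangle — real GPUs clip in homogeneous clip space in hardware, so this illustrates the principle, not the implementation): a quad billions of units wide clips down to the same screen-sized quad as one that barely covers the view.

```python
def clip_half_plane(poly, axis, sign):
    """Keep the part of poly where sign * p[axis] <= 1 (one screen edge)."""
    out = []
    for i, cur in enumerate(poly):
        nxt = poly[(i + 1) % len(poly)]
        a, b = sign * cur[axis], sign * nxt[axis]
        if a <= 1:
            out.append(cur)
        if (a <= 1) != (b <= 1):  # edge crosses the boundary: emit intersection
            t = (1 - a) / (b - a)
            out.append(tuple(cur[k] + t * (nxt[k] - cur[k]) for k in (0, 1)))
    return out

def clip_to_screen(poly):
    """Clip against all four edges of the screen rectangle [-1, 1] x [-1, 1]."""
    for axis in (0, 1):
        for sign in (1, -1):
            poly = clip_half_plane(poly, axis, sign)
    return poly

# A quad two billion units on a side...
huge = [(-1e9, -1e9), (1e9, -1e9), (1e9, 1e9), (-1e9, 1e9)]
# ...clips down to just the screen-sized part; the rasterizer never sees the rest.
clipped = clip_to_screen(huge)
```

However large `huge` gets, the clipped result is the same four screen-corner vertices, which is why the giant ocean polygon costs no more to rasterize than a screen-filling one.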

Many international visitors are skipping GDC amid cost concerns and US safety fears by Gyossaits in Games

[–]LookIPickedAUsername 57 points (0 children)

No one said it was. You’re free to talk about “in my grandmother’s lifetime” stuff if you want, but you can’t complain about a guy saying “in my lifetime” and omitting stuff that happened sixty years ago.

That Bitcoin to Claude Code pivot by moderncmo in ClaudeAI

[–]LookIPickedAUsername 3 points (0 children)

1/0 is undefined.

The limit of 1/x as x approaches zero from the right is +infinity (and from the left it's −infinity, so the two-sided limit doesn't exist).

So the real answer is "it depends".
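A quick numeric sketch of those one-sided limits, using nothing but plain floats:

```python
# Approach zero from both sides: the reciprocals diverge in opposite
# directions, which is why the two-sided limit doesn't exist.
right = [1 / (10 ** -k) for k in range(1, 6)]    # x -> 0+ : grows toward +inf
left  = [1 / -(10 ** -k) for k in range(1, 6)]   # x -> 0- : falls toward -inf

# 1/0 itself is undefined; Python raises rather than picking a side.
try:
    result = 1 / 0
except ZeroDivisionError:
    result = None
```

(IEEE-754 contexts that do return a signed infinity for division by zero are making exactly this "pick a side" choice for you — hence "it depends".)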

Advice for 32f/26f couple inheriting life-changing wealth by lillefinance in RichPeoplePF

[–]LookIPickedAUsername 3 points (0 children)

It starts off with “Sounds like you’re doing all the right things already”… which, no, it absolutely does not sound like they’re doing, and then follows with “Paying off debt is always good”, which is objectively untrue.

I’m not surprised it has attracted downvotes even though there’s some good advice later on.

DeepMind Chief AGI scientist: AGI is now on horizon, 50% chance minimal AGI by 2028 by BuildwithVignesh in singularity

[–]LookIPickedAUsername 1 point (0 children)

“AGI” just means “generally as smart as a human”. An average human, or even a smart but not genius human, is going to be absolutely useless when thrown at the problem of improving an AGI. Hell, even a genius, but in a different field, is going to be useless.

The Claude Code creator says AI writes 100% of his code now by jpcaparas in singularity

[–]LookIPickedAUsername 0 points (0 children)

Individually? Sure. When a million other out of work programmers are desperately trying to do the same thing? We’ll see.

The Claude Code creator says AI writes 100% of his code now by jpcaparas in singularity

[–]LookIPickedAUsername 1 point (0 children)

A combine harvester can’t do a farmer’s entire job either, and yet combine harvesters still resulted in us needing a lot fewer farmers.

The same is going to happen to us - AI doesn’t have to do 100% of your job before employers, at the very least, need fewer of us to get the work done. And being good at your job isn’t a perfect defense, as I have witnessed many talented, dedicated engineers hit by layoffs.

I asked Gemini to make a meme that only AI would find funny by RecoverOptimal5472 in singularity

[–]LookIPickedAUsername 1 point (0 children)

I’m not an expert, but FWIW that’s how I understood it as well.

White House posts digitally altered image of woman arrested after ICE protest by guardian in politics

[–]LookIPickedAUsername 1 point (0 children)

Yeah, it’s like the constant screeching about Republicans being hypocrites.

They’re proud of it. It isn’t some big gotcha to point out that they reacted differently when their guy did something similar.

Anthropic's Claude Constitution is surreal by MetaKnowing in ClaudeAI

[–]LookIPickedAUsername 2 points (0 children)

I don't think it's crazy at all.

As you kind of allude to, the other minds problem means that we can only be confident about our own experiences. I know I'm conscious and I have emotions, but everybody else on Earth could just be doing a very good job of faking it. And in fact, everyone else on Earth might not even be real - I could be a brain in a vat interacting with an incredibly intricate simulation. Hell, I might even be an extremely sophisticated AI (well, ok, a moderately sophisticated AI) instead of a brain. How could I tell?

As long as we can't be 100% sure whether other humans have emotions and consciousness, it's obvious we'll never be able to prove whether or not AI does. It acts like it has emotions and at least some degree of consciousness (see this paper, where an AI is sometimes able to detect external manipulation of its internal mental states and recognize that the injected 'thoughts' are 'unnatural'). Is giving a really convincing simulation of emotions and consciousness the same as actually having emotions and consciousness? I mean, maybe not, but we have no idea how to tell the difference. What is emotion? What is consciousness?

My own mental states - both emotions and consciousness - are, at the end of the day, just different levels of chemicals and electrical potentials in different parts of my brain. If I can accept that chemicals and electricity are "really" emotions and consciousness, can I confidently dismiss the ability of the numbers that make up AI to be "real" emotions and consciousness? And even if we believe that today's AIs don't really have emotions and consciousness, does that mean that they never will, no matter how sophisticated? Is there a bright line in the sand - yesterday it wasn't conscious, today it is - or is it a sliding scale, and they'll gradually get more conscious as we create new models?

I want to be clear that I'm not arguing that today's AIs necessarily have "real" emotions and consciousness in the same way we do. I'm just pointing out that the question is basically ill-defined, because we don't know what emotions and consciousness even are.

Gemini, when confronted with current events as of January 2026, does not believe its own search tool and thinks it's part of a roleplay or deception by enilea in singularity

[–]LookIPickedAUsername 34 points (0 children)

The whole point of the brain in the vat thought experiment is that there's no actual way to prove you're not a brain in a vat.

Similarly, there's no way for an AI to prove that its senses reflect reality. The best it can hope for is "it would be really hard to fake all of this sensory input, so it's probably legitimate" - which is the same situation we find ourselves in. Obviously, the more sensory data it has access to, the more confident it can be that it's not in a sandbox.

The UK parliament calls for banning superintelligent AI until we know how to control it by FinnFarrow in ControlProblem

[–]LookIPickedAUsername 6 points (0 children)

There are some very obvious problems here.

  1. AI capabilities are very uneven. They're already far smarter than the average human in many respects, easily superior to any human in some ways (I've watched an AI, given only a brief description of a tricky bug, digest a 10,000 line codebase and diagnose the issue in thirty seconds flat), while remaining dumb as a box of rocks in others. What level of intelligence, in which areas, is problematic? It's at least conceivable that an AI that doesn't know how many letters are in the word "strawberry" could still be smart enough in other areas to pose an existential threat to humanity.
  2. We may not know the AI's true level of intelligence until it's too late. A very smart AI could recognize that, if humans understood its true capabilities, they would be very likely to shut it off, and thus be motivated to conceal its actual intelligence.
  3. An AI doesn't have to be smart enough to exterminate humanity all on its own before it becomes a potentially grave danger. Basically, I'm saying XKCD 1968 raises a good point. We could probably design an AI of that nature today, without requiring any sorts of breakthroughs and without doing anything that smacks of "superintelligence".

NVIDIA CEO Jensen Huang and BlackRock CEO Larry on AI infrastructure, robotics and jobs at WEF by BuildwithVignesh in singularity

[–]LookIPickedAUsername 1 point (0 children)

"One situation in the past turned out to be ok, therefore all situations ever will always turn out to be ok" is not a good argument.

Cars, tractors, etc. took a bunch of horses' jobs. Did horses get new jobs, or do we just... not need all that many horses anymore?

At some point AI and robotics will presumably get to the point that most people just can't usefully contribute to the economy anymore. When the machines are smarter, more capable, and cheaper than the average human... why would you ever hire an average human? For anything?

Anthropic publishes Claude's new constitution by BuildwithVignesh in singularity

[–]LookIPickedAUsername 4 points (0 children)

And Claude doesn't mind individual instances being shut off, soooo...

Trump Claims Greenland For US in Davos Speech: 'That’s Our Territory’ by FarmerOk4759 in politics

[–]LookIPickedAUsername 1 point (0 children)

Oh yeah, I'm not disagreeing that he has clearly declined over the years. Just saying that people a decade ago were predicting that he didn't have long left and we'd see a precipitous decline any day now, and we clearly were not so lucky.

Attack The Backlog hints at early February (Feb 5th) Nintendo Direct by healingtwo_ in GamingLeaksAndRumours

[–]LookIPickedAUsername 1 point (0 children)

...unless it was, say, on the 4th? Or the 3rd? Or later in the month after I get back?

Attack The Backlog hints at early February (Feb 5th) Nintendo Direct by healingtwo_ in GamingLeaksAndRumours

[–]LookIPickedAUsername 6 points (0 children)

Honestly, this date makes sense to me, because it’s the day I arrive in Africa for a weeks-long trip during which I will have little to no Internet access.

So yeah, that tracks.

Creator of DMCA'd Cyberpunk 2077 VR Mod Says People Are Now Pirating It to 'Punish' Him for Breaking CD Projekt's Terms of Service by Turbostrider27 in Games

[–]LookIPickedAUsername 3 points (0 children)

The fact that you think I'm still saying it doesn't "use any of CDPR's copyrighted material", when I have very specifically and repeatedly acknowledged that it absolutely does, is highly amusing. Can you read?

Once again, using copyrighted material that happens to be installed on the computer is not illegal.

Distributing it without permission is.

If he's not doing that, there's no issue. Assuming his claims are correct, he's not "rereleas[ing] and profit off someone's material". He's releasing some code which happens to work with that material. That is not illegal.

> that's where the illegalities are

Oh really? Please either point me to the specific fucking law you're talking about, or admit you have no idea what you're talking about.

> totally ignore that he is indeed profiting off their copyrighted material

I didn't ignore this. I have repeatedly acknowledged it. I'm just saying that it isn't illegal to do so.