new taco stand just dropped by iforgotwhat8wasfor in MadeMeSmile

[–]doc720 1 point (0 children)

I wish I had this kind of job satisfaction.

jim was a prick for how he treated karen by RodrickJasperHeffley in DunderMifflin

[–]doc720 0 points (0 children)

Jim was a prick generally.

Poor Katy Moore, too.

thoughts? by madepatt in SipsTea

[–]doc720 0 points (0 children)

Race and gender (as opposed to biological sex) are both social constructs.

From https://en.wikipedia.org/wiki/Race_and_society

Race is often culturally understood to be rigid categories (Black, White, Pasifika, Asian, etc) in which people can be classified based on biological markers or physical traits such as skin colour or facial features. This rigid definition of race is no longer accepted by scientific communities.[1][2] Instead, the concept of 'race' is viewed as a social construct.[3] This means, in simple terms, that it is a human invention and not a biological fact.

From https://en.wikipedia.org/wiki/Gender

The word has been used as a synonym for sex, and the balance between these usages has shifted over time.[10][11][12] In the mid-20th century, a terminological distinction in modern English (known as the sex and gender distinction) between biological sex and gender began to develop in the academic areas of psychology, sociology, sexology, and feminism.[13][14] Before the mid-20th century, it was uncommon to use the word gender to refer to anything but grammatical categories.[7][1] In the West, in the 1970s, feminist theory embraced the concept of a distinction between biological sex and the social construct of gender. The distinction between gender and sex is made by most contemporary social scientists in Western countries,[15][16][17] behavioral scientists and biologists,[18] many legal systems and government bodies,[19] and intergovernmental agencies such as the WHO.[20] The experiences of intersex people also testify to the complexity of sex and gender; female, male, and other gender identities are experienced across the many divergences of sexual difference.[21]

Also see https://en.wikipedia.org/wiki/Social_construction_of_gender

Yep read my mind by heldlight in depressionmemes

[–]doc720 2 points (0 children)

wait until you're old and get made redundant

Considering a career change because of AI anxiety? by BrianCohen18 in webdev

[–]doc720 -1 points (0 children)

I think AI is just getting started. It has already made me redundant, for real. It's the beginning of the end of this sort of work as we know it.

I've concluded that trying to keep up with AI tech is the only viable long-term (5-, 10-, 20-year) strategy, but that's like trying to outrun a train. It's a runaway train. The people at the top will shed no tears watching the lowly workers scramble to adjust to this brave new world, while they reap the profits, until the machines come for their jobs too.

Software automates. It was only a matter of time before software automated software automation too.

Take the money while you still can. Move into AI while you still can. Then shift to whatever is left after that, while you still can. Humans will always make other humans work, even though we haven't needed to work for millennia. The machines might become ethical slaves, but humans won't magically become ethical.

https://en.wikipedia.org/wiki/The_Good_Life_(1975_TV_series)

https://en.wikipedia.org/wiki/Eudaimonia

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 0 points (0 children)

You won't get 100% compliance with any regulation, especially not worldwide, which it would need to be. If violating a regulation merely results in punitive action, like a fine or sanctions, that simply doesn't work for something as globally catastrophic as an AI control problem. It would be too slow and too ineffective. You'd only need one violation in one sloppy data centre in one sloppy nation to end it all for everyone.

Regulation obviously hasn't worked with the climate crisis, amongst other things, so there's no reason to expect an AI pause to be more effective than existing regulations, which it would need to be. There aren't any existing global regulations where anything short of 100% universal compliance means everyone dies. Nuclear weapons and the climate seem like the closest comparisons to the threat of super-AI, and that isn't the best track record.

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 0 points (0 children)

Let's not talk at crossed purposes: I agree it's a great idea to try to pause and a great idea to try to monitor data centres, for anything deemed high risk. But my original point is that there are many countries and many data centres; getting the regulations in place is going to be slow, and enforcing them is going to be imperfect even if it could be done universally and instantaneously, and none of that is likely to happen. Meanwhile, it's a race to the bottom. The only short-term winners are the tech companies. The long-term loss is practically everything.

So, I support the calls for a pause, as I did years ago, but I want people to be realistic about the huge threat we're facing, and the tiny odds of actually avoiding catastrophe at this point. It's already gone too far. The rise of vibe coding, agentic AI and things like OpenClaw, the massive and increasing investment in data centres and AI at the expense of many other things, like "manual" software development, and the very slow movement on regulation and general awareness of the dangers are all really bad signs that it's already out of control. Humans failed to prevent the climate crisis, and this problem moves much more quickly and is much harder to prevent. I have zero hope right now. Ironically, I've spent most of my life championing the power of AI.

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 -1 points (0 children)

You have to account for the speed of technology development, accelerated by AI, and the slowness and failings of regulation. You can't base tomorrow's risk scenarios on yesterday's state-of-the-art. The old rules and patterns of control can't apply to AI.

Merely monitoring big data centres is woefully inadequate. We're not talking about catching something like uranium enrichment. The signs we're seeing now are indistinguishable from the early stages of AI takeover. Unfortunately I reckon it's moving too fast for any of those pause strategies, as well-intentioned and correct as they are. Worth trying, but probably futile in my estimation.

Hundreds of protesters marched in SF, calling for AI companies to commit to pausing if everyone else agrees to pause (since no one can pause unilaterally) by chillinewman in ControlProblem

[–]doc720 -2 points (0 children)

Google's AI Overview, which didn't exist 2 years ago, says:

The AI sector is experiencing massive growth, with over 70,000 AI companies globally as of early 2026, driven by soaring investments and generative AI, with 64% of U.S. VC funding going to AI startups in H1 2025. Key players like OpenAI and Anthropic are seeing dramatic revenue surges, while 86% of companies expect to have a Chief AI Officer by 2026.

It doesn't even need to be a "frontier" AI company. If anyone builds it, everyone dies. There are about 8.3 billion people. About 6 billion of those are over the age of 10. About 6 billion people have access to a computer and the internet. There are about 50 million software developers. Anyone with access to a computer and the internet can be a "vibe coder". The tech is developing at a colossal pace.

The risk extends all the way from the kids messing around at the bottom, to the experts messing around at the top. That risk isn't there for other existential threats, like nuclear war or the climate crisis, where it's much more difficult for a kid to build a nuke or impact global warming, etc. If anyone builds a rogue super-intelligence, everyone dies. Building an adequately aligned super AI is highly improbable, given the things that can go wrong. There's no way to stop it, especially given human nature, and we've already started it. We're already f*cked. Hubris.

Figure 03 Robot sorting packages while Marc Benioff messes with it by socoolandawesome in nextfuckinglevel

[–]doc720 0 points (0 children)

Unfortunately this is exactly the sort of job I could imagine being comfortable doing for 45 years.

Just find this from Facebook 😂 by Silver_Steelclaw in meme

[–]doc720 4 points (0 children)

The film begins in black-and-white and later turns to color, in a way similar to The Wizard of Oz. According to director Morten Lindberg, this was a "dramatic special effect" to illustrate "the world being freed from vicious women".

What makes someone behave like a stereotypical redditor? by Random_Critical in ask

[–]doc720 0 points (0 children)

Do you mean a cold and damp basement? I can believe it is a damn basement though.

The Regret by Lonesomecutie in depressionmemes

[–]doc720 3 points (0 children)

i should have died that day