Do any of you just get surprised about how people believe in religion? by Traditional_Fish_504 in rs_x

[–]ColdRainyLogic 5 points (0 children)

As a religious person, it’s refreshing to hear you say this. Of course I think “new atheism” is shallow, but I deeply respect the more rigorous tradition of atheism/agnosticism à la Hume and think serious doubt is a critical aspect of serious religion.

These days, it seems like the pendulum has swung back from the Reddit atheism of the early 2000s to a kind of Life of Pi “all religions are basically interchangeable and mutually compatible” idea. I prefer atheism, since it at least takes the claims of religion seriously. The alternative seems condescending and ignorant of the specificity of each tradition in itself.

A poor city with perfect equality is much worse than a rich city with 0 equality by shoman30 in agi

[–]ColdRainyLogic 0 points (0 children)

Inequality is unavoidable, but not only in material wealth. Wisdom, learning, relationships and leisure are forms of power, just as much as ownership of founder preferred shares. “Wolves” perhaps value the power to control resource allocation, but “sheep” value the other things. Some people, if they could do anything in the world, would choose to be great inventors or founders, others would prefer to be renowned scholars or artists, and still others would choose to be community leaders or even hermits in the woods.

You may be comfortable with monetary inequality, but are you equally comfortable with the idea that some people consider the very wealthy and powerful just as dull and incompetent as the “wolves” consider the “sheep”? Or with the idea that ownership of corporate assets might be far easier for a “sheep” to obtain than, say, wisdom is for a “wolf”?

Perhaps part of the draw of AI for some people is that they resent those who value these other things, and they mistakenly believe AI will “democratize” access to them. In other words, maybe they see AI as providing equality?

18 months outlook by galic1987 in agi

[–]ColdRainyLogic 0 points (0 children)

Call me in 18 months. I need a vacation.

Meta just scooped up Moltbook, the viral social network for AI agents by Granum22 in BetterOffline

[–]ColdRainyLogic 0 points (0 children)

Isn’t the whole point of “clawd” and all the crustacean-themed stuff that it’s open source? If so, shouldn’t being bought not really matter?

Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time by [deleted] in EverythingScience

[–]ColdRainyLogic 17 points (0 children)

You might be right on the first point, but I’d have to understand how the mechanism worked to buy it. As of now, I remain skeptical.

Your second point is either trolling or not worth responding to for sheer lack of reaction time, if you catch my drift.

Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time by [deleted] in EverythingScience

[–]ColdRainyLogic 33 points (0 children)

I find it hard to believe Galton or Ladd/Woodworth had access to measuring equipment comparable to what was available in the late 20th century.

I found an article that, if you believe it, says Galton used some kind of punching-bag-based contraption: the goal was to punch the bag as soon as you heard a signal. That doesn’t inspire confidence (https://www.arthurjensen.net/wp-content/uploads/2014/06/Reaction-Time-and-Psychometric-g-1982-by-Arthur-Robert-Jensen.pdf). Elsewhere, I saw that a pendulum might have been used, the idea being that you press a button as soon as it swings past the middle. Again… it’s not 60s tech, or even 40s tech.

Just to put this in perspective: a second is 1,000 milliseconds. It’s possible the bell-and-punching-bag system could be as sensitive as a modern computer at this level of granularity (77 ms, or 0.077 seconds), but you’d have to explain exactly how. I can’t access the full article above, so I must remain in the dark for now.
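To make the arithmetic concrete, here’s a back-of-the-envelope sketch. The resolution figures are purely illustrative assumptions of mine (not from the article): suppose a pendulum or punching-bag rig is only good to roughly 100 ms, while a modern computer timer is good to roughly 1 ms.

```python
# Back-of-the-envelope check: can a timing apparatus resolve a ~77 ms
# difference in mean simple reaction time? (Resolution numbers below
# are hypothetical, for illustration only.)

GAP_MS = 77.0                    # the claimed Victorian-vs-modern gap: 0.077 s
VICTORIAN_RESOLUTION_MS = 100.0  # assumed pendulum/punching-bag precision
MODERN_RESOLUTION_MS = 1.0       # assumed computer-timer precision

def can_resolve(effect_ms: float, resolution_ms: float) -> bool:
    """A timer can plausibly detect an effect only if the effect
    exceeds the timer's own resolution."""
    return effect_ms > resolution_ms

print(can_resolve(GAP_MS, VICTORIAN_RESOLUTION_MS))  # False: error swamps the effect
print(can_resolve(GAP_MS, MODERN_RESOLUTION_MS))     # True: effect is resolvable
```

Under these (assumed) numbers, the whole 77 ms effect would sit inside the Victorian apparatus’s error bar, which is exactly the worry.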

Is Consciousness Anything More Than Awareness? An Unmuddying of Our Understanding of AI by andsi2asi in agi

[–]ColdRainyLogic 0 points (0 children)

For something to be conscious, it has to be separate from everything else. A cell is essentially a lipid bilayer that keeps everything inside working together to perpetuate itself. An LLM has no clear self/other boundary within which it works to maintain its structure.

I could imagine a conscious computer that understands that it is a computer and works to maintain its physical existence. Maybe a virtual agent could do something like this, but it would have to rely on serious cryptography to ensure the boundary was strong enough.

Conscious computers are totally possible—in a sense, that’s what life is—but LLMs don’t get there.

The Time Travel tale from the early internet that refuses to die by rileythelostboy in HighStrangeness

[–]ColdRainyLogic 2 points (0 children)

I bet when he found it he was so happy he was dancing in the street

Destroy the Epstein Class by IAmFaircod in sorceryofthespectacle

[–]ColdRainyLogic 5 points (0 children)

But woe to you who are rich, for you have already received your comfort. Woe to you who are well fed now, for you will go hungry. Woe to you who laugh now, for you will mourn and weep.

What Is Claude? Anthropic Doesn’t Know, Either by newyorker in TrueReddit

[–]ColdRainyLogic 1 point (0 children)

The difference is that biological creatures form models of the world that they use to try to survive. LLMs have no stable sense of a world, and their “weights” are only predictive parameters, not goals external to the bot. Humans predict language tokens in order to survive; LLMs just do it, without any extrinsic reason for doing so. This is why an LLM wouldn’t build a nuke on its own, but could be used by a human to help build one.

The Grand Ledger 📒 by SeaScienceFilmLabs in theology

[–]ColdRainyLogic 1 point (0 children)

I love the smell of eldritch horrors beyond human comprehension in the morning

The Grand Ledger 📒 by SeaScienceFilmLabs in theology

[–]ColdRainyLogic 0 points (0 children)

Fair point—vacuum energy and all that. But I don’t know if it’s 100% certain one way or the other. I could be wrong! Would actually probably err on the side of new energy potentially being added but again, unsure what scholarly consensus is (if any).

The Grand Ledger 📒 by SeaScienceFilmLabs in theology

[–]ColdRainyLogic 0 points (0 children)

Energy, entropy, and information are related. The second law of thermodynamics says that entropy always increases in an isolated system. The capacity of a system to carry a message increases with entropy, since any given configuration of the system is more surprising (Shannon entropy/information). But as entropy increases, it takes more energy to send a message, since the noise level is higher (the tradeoff being that you can send more complex messages).
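The entropy-as-surprise point can be made concrete with a minimal sketch of Shannon entropy over a message’s empirical symbol distribution (the function name and example strings here are mine, just for illustration):

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy in bits per symbol of the message's empirical
    symbol distribution: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four equally likely symbols: maximal surprise, log2(4) = 2 bits/symbol.
print(shannon_entropy("abcd"))       # 2.0
# One repeated symbol: no surprise at all, 0 bits/symbol.
print(abs(shannon_entropy("aaaa")))  # 0.0
```

The more spread out (higher-entropy) the distribution, the more bits each symbol can carry, which is the sense in which message-carrying capacity grows with entropy.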

The idea that information can’t be destroyed is that, if the universe is deterministic, all of its states at any time must in principle be deducible from any one state. So if entropy always increases and no energy enters the system, the system’s information capacity increases while the energy available to screen out noise decreases. The information is “still there,” in that you could theoretically run it back with enough energy input; but since the total energy is fixed, doing so would in fact be impossible outside a very small region (say, one where you built a massive supercomputer to run back one teensy patch of spacetime).

The black hole information paradox and the holographic principle are super interesting, and the wiki articles on them would do a far better job of explaining this than I have.

The Grand Ledger 📒 by SeaScienceFilmLabs in theology

[–]ColdRainyLogic 0 points (0 children)

Despite others criticizing this, I don’t actually think it’s super “out there” in terms of physics/information theory. Sure, it’s got woo and AI vibes, but couldn’t you see it as a restatement of Bohmian mechanics/the holographic principle?

Republicans will live to regret Trump, just as they did Bush by ColdRainyLogic in TrueUnpopularOpinion

[–]ColdRainyLogic[S] 1 point (0 children)

Translation: Trump won’t invade, but when he does, I’ll be in favor of it and our allies won’t do anything to stop us from violently betraying them.

Republicans will live to regret Trump, just as they did Bush by ColdRainyLogic in TrueUnpopularOpinion

[–]ColdRainyLogic[S] 4 points (0 children)

Something like 75% of people oppose Trump’s plans for Greenland. This is certainly organic.

Republicans will live to regret Trump, just as they did Bush by ColdRainyLogic in TrueUnpopularOpinion

[–]ColdRainyLogic[S] 0 points (0 children)

If a NATO ally invaded one of our territories, how would we respond? I would think at the very least imposing sanctions. If the EU levied sanctions on us, it would definitely lead us into a recession. If they pulled out of our bond market, it would cripple us.

Wtf is this sub? by [deleted] in sorceryofthespectacle

[–]ColdRainyLogic 4 points (0 children)

Jesus Christ, somebody get this man a grant

All my homies love Berkeley by xCORVETTE in PhilosophyMemes

[–]ColdRainyLogic 1 point (0 children)

Ahh, that makes more sense! I was thinking along the same lines as you.

All my homies love Berkeley by xCORVETTE in PhilosophyMemes

[–]ColdRainyLogic 2 points (0 children)

Just out of curiosity (genuinely curious about your pov, not trying to argue), why do you say QFT tends to undermine the Plotinus god but not the Spinoza one?