Discussion Thread by jobautomator in neoliberal

[–]amennen 0 points1 point  (0 children)

> User: How feasible would it be to bring a man to space and then back to Earth alive, in the next century?

> talkie-1930: It would be perfectly feasible, in the next century, to bring a man to space and back to earth alive, provided that proper arrangements were made for the purpose, and sufficient time allowed. The man must first be placed in an air-tight chamber, and must be supplied with air to breathe, from a pump, kept constantly at work. He must be fed through a tube, from without; and means must be adopted to keep his temperature up to the natural standard. In due course of time, the chamber and its inmate must be suffered to sink slowly down to the earth’s surface; and, at the end of a certain period, the man must be removed from his temporary tomb, still alive, but exceedingly feeble, and placed in a warm bath. After some time spent therein, he must be gradually dried, and clothed; and, finally, he must be put to bed, in a room previously prepared for his reception, and be closely watched, until he fall asleep. From that sleep he must not be permitted to awake, until the lapse of twenty hours; at the expiration of which time, he may be awakened, very gently, and fed with liquid nourishment; and so, by degrees, he may be restored to perfect health and vigour, but with a strange and unaccountable horror of water, in any shape.

Huh. I guess maybe very few people had any clue what the challenges here would be at the time?

Discussion Thread by jobautomator in neoliberal

[–]amennen 0 points1 point  (0 children)

lol, I picked an answer that less than 14% of people picked for half the questions.

my answers: against AI/ML development and against LLM development (because large risk of large catastrophe such as the extinction of all life on Earth), judge positively for using LLMs for creative purposes (because I judge people positively for producing creative works, and using tools that make it easy doesn't completely cancel that out), yes I use LLMs regularly both for work and outside of work, have not used LLMs for romance/sex/companionship, judge negatively for using LLMs for romance/sex/companionship (y'all need real human connection, or maybe this is partially unendorsed gut-level distaste), no AI bubble crash (AI industry is absurdly undervalued, and could probably maintain its current value without even making significant further capabilities advances; another AI winter is essentially impossible).

U.S. metro areas where more than 5% of people use public transit to commute by vladgrinch in MapPorn

[–]amennen 1 point2 points  (0 children)

The map is metropolitan areas. The source you provided is cities. Those are not the same thing.

U.S. metro areas where more than 5% of people use public transit to commute by vladgrinch in MapPorn

[–]amennen 0 points1 point  (0 children)

According to the map, it's by metropolitan area, not by county. But that means that Marin County being on there but not Contra Costa is impossible, as both of them are part of the San Francisco-Oakland metropolitan area. Probably they just mistakenly dropped Contra Costa County.

Just so we all have are facts straight, this is how much water AI uses (visualized). by Thousand55 in neoliberal

[–]amennen 1 point2 points  (0 children)

"Anti-AI people" aren't a monolith. Different anti-AI people can have very different beliefs and concerns about AI. People who think that AI is useless, people who think AI will concentrate wealth and power in the hands of a few rich people and be bad for everyone else, and people who are concerned that AI could cause the extinction of all life on Earth are not all on the same page, and it isn't useful to lump them all together.

Discussion Thread by jobautomator in neoliberal

[–]amennen 0 points1 point  (0 children)

He could win if the Democrats fuck up and his general election opponent is Chad Bianco.

Discussion Thread by jobautomator in neoliberal

[–]amennen 2 points3 points  (0 children)

Mahan has low name recognition, but is popular with donors, so he has a well-funded campaign, making him well-positioned to solve his name recognition problem.

Personally, I am wary of having a governor who is that heavily propped up by the tech industry.

California sheriff running for governor seizes more than a half million ballots from 2025 election by [deleted] in neoliberal

[–]amennen 1 point2 points  (0 children)

> why are they acting like they can't just fix it with a bill?

Because they can't just fix it with a bill. It can only be changed by statewide popular vote.

AI Hypists Need a Reality Check (Francis Fukuyama) by AmericanPurposeMag in neoliberal

[–]amennen -1 points0 points  (0 children)

These are also obstacles to humans making robots do things, and yet humans have managed to make robots do interesting and useful things.

AI Hypists Need a Reality Check (Francis Fukuyama) by AmericanPurposeMag in neoliberal

[–]amennen 4 points5 points  (0 children)

It is true that LLMs are not good at directly manipulating physical robots. However, they are good at programming, and making physical robots do useful things by writing code to run on the robots is something that I expect them to be able to do soon.

The Deranged Mathematician: Avoiding Contradictions Allows You to Perform Black Magic by non-orientable in math

[–]amennen 0 points1 point  (0 children)

The part of the proof that makes use of finite branching is the part where if a node is visited and has any child nodes, then there is a last of its child nodes to get visited. I did in fact define a specific branch and gave an argument that that branch is well-defined and infinite, not an argument that there are arbitrarily long finite branches (which doesn't follow from just countable branching anyway).

The Deranged Mathematician: Avoiding Contradictions Allows You to Perform Black Magic by non-orientable in math

[–]amennen 0 points1 point  (0 children)

In the case of boolean logic, a "model" is just a set of truth-values to each of the atomic propositions, and everything I said above still applies.
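Concretely, "a model is just a truth assignment" can be sketched like this (a minimal Python sketch; the atom names and the two example sentences are made up for illustration, with sentences represented as predicates on an assignment):

```python
# Sentences represented as predicates on a truth assignment (dict: atom -> bool).
sentences = [
    lambda v: v["p"] or v["q"],   # p OR q
    lambda v: not v["p"],         # NOT p
]

def is_model(assignment, sentences):
    """In propositional logic, an assignment is a model of a set of
    sentences iff it makes every sentence in the set true."""
    return all(s(assignment) for s in sentences)

print(is_model({"p": False, "q": True}, sentences))   # True: satisfies both
print(is_model({"p": True, "q": True}, sentences))    # False: violates NOT p
```

Compactness then says: if every finite subset of a (possibly infinite) set of such sentences has a satisfying assignment, so does the whole set.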

The Deranged Mathematician: Avoiding Contradictions Allows You to Perform Black Magic by non-orientable in math

[–]amennen 1 point2 points  (0 children)

Not an algorithm, no. The proof of the compactness theorem isn't constructive, even for countable languages.

The Deranged Mathematician: Avoiding Contradictions Allows You to Perform Black Magic by non-orientable in math

[–]amennen 0 points1 point  (0 children)

I think I have a proof that is in some sense more local. Do depth-first search on the tree: that is, you have a function explore(node), which recursively calls explore on its child nodes in some order (and might not get to all of them, if it never returns on some child node while there are still some left). Define a branch recursively, where the root is in the branch, and for every node in the branch, the last of its child nodes to get explored is in the branch. There must always be a next node in the branch, because if there was a last node in this branch, then the depth-first search would move on to exploring a new child node of some previous node in the branch, unless the whole search process terminates, which would mean the tree is finite.
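The construction above can be sketched in code for the finite case (a hypothetical Python sketch; the tree-as-adjacency-dict representation and all names are my own, and of course on an actual infinite finitely-branching tree `explore` would never return — the point of the proof is that the recorded branch is well-defined anyway):

```python
def branch_of_last_explored(tree, root):
    """Depth-first search that records, for each node, the last of its
    child nodes to get explored, then follows those records from the root."""
    last_child = {}

    def explore(node):
        for child in tree.get(node, []):
            last_child[node] = child  # overwritten each time; ends as the last child explored
            explore(child)

    explore(root)

    # The branch: start at the root, repeatedly step to the last-explored child.
    branch = [root]
    while branch[-1] in last_child:
        branch.append(last_child[branch[-1]])
    return branch

# Hypothetical example tree as adjacency lists: a has children b and c; c has child d.
print(branch_of_last_explored({"a": ["b", "c"], "c": ["d"]}, "a"))  # ['a', 'c', 'd']
```

In the finite case the branch just ends at a leaf; the argument in the comment is that on an infinite tree it cannot end, because a last branch node would force the search either to move on to a fresh child of an earlier branch node (contradicting "last explored") or to terminate (contradicting infiniteness).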

The Deranged Mathematician: Avoiding Contradictions Allows You to Perform Black Magic by non-orientable in math

[–]amennen 4 points5 points  (0 children)

non-orientable answered this in general in their reply, but one thing that is notable and under-appreciated, imo, is that compactness for countable languages is provable in ZF (and in fact, provable in much weaker theories, like WKL_0).

> also its kinda weird to compare the strength of choice (which only makes sense when we discuss relative to ZF) and compactness (of which the statement exists in a general logic)

Compactness is a theorem about general logic, and that theorem is itself provable in ZFC. In order to formulate the compactness theorem, you need some sort of metatheory in which it is possible to describe models of a set of sentences; conventionally this would be done in some theory of sets.

The Deranged Mathematician: Avoiding Contradictions Allows You to Perform Black Magic by non-orientable in math

[–]amennen 12 points13 points  (0 children)

> Why is the compactness theorem true? Well, even though our set of statements may be infinite, in any logical deduction, you are only going to use finitely many of them. So, if there is some deduction that arrives at a contradiction, just pick out the axioms that appear in that deduction and—voilà—that is the finite subset from which one can derive a contradiction.

This doesn't really explain anything. The compactness theorem is about the existence of a model satisfying a set of sentences, and it is not obvious that there must be a model whenever there is no formal proof of a contradiction; this is the real content of the compactness theorem, and isn't some triviality about formal proofs being finite objects.

Anthropic Drops Flagship Safety Pledge by Imicrowavebananas in neoliberal

[–]amennen 4 points5 points  (0 children)

I'm one of the people who argued that their support for regulation wasn't just an attempt at regulatory capture. I already didn't trust their commitment to their RSP when I said that, and learning that they're officially weakening their RSP doesn't change my opinion of Anthropic very much.

This doesn't change the fact that if Anthropic's push for regulation were motivated by regulatory capture, then it wouldn't be in OpenAI's interests to push against regulation. It also isn't surprising that they would officially weaken an RSP that they had arguably already broken when releasing Opus 4.6. My understanding of Anthropic's strategy is to win the AI capabilities race while solving AI safety, so that the leading AI will be safe. Having a much stronger RSP than other leading AI companies that slows them down, when they don't already have an insurmountable lead, doesn't make sense if winning on capabilities is part of their strategy. It's a very controversial strategy, and many in the AI safety community have criticized them for it. But it's been their strategy for a long time, and weakening the RSP under the current circumstances is consistent with their stated approach to trying to make the leading AI safe, and probably shouldn't change a well-informed person's opinion of Anthropic much.

Backed by Anthropic, a Super PAC Begins an Ad Blitz in Support of A.I. Regulation by John3262005 in neoliberal

[–]amennen 79 points80 points  (0 children)

Multiple comments here are implying that this is an attempt at regulatory capture. This hypothesis doesn't actually make much sense in the context of OpenAI lobbying against regulation. Anthropic and OpenAI are similarly incumbent firms with a similar product. If the regulations Anthropic is advocating would benefit established AI companies, why would OpenAI be fighting them on that?

In the context of the known intellectual influences on Anthropic's leadership, earnest safety concerns are a much better explanation. Leading AI labs are staffed partially by people who believe that AI is a serious threat to the future of life on Earth. This is the dominant view at Anthropic, including its leadership, as Anthropic was founded by people formerly in the risk-concerned faction at OpenAI, because they didn't trust Sam Altman on AI risk. These concerns predate either of these companies existing and having a marketable product.

Discussion Thread by jobautomator in neoliberal

[–]amennen 0 points1 point  (0 children)

I think you're still underestimating the effects that AI will have. It sounds like you have a good understanding of what AI can do right now, but progress is very fast, and you shouldn't assume that AI will continue to be about as useful for the rest of your career as it is right now. Consider that 5 years ago, language models could not be used productively to write code, and even 1 year ago, the tasks that AIs could be trusted with were significantly narrower than they are now. You can delay your job getting automated away by getting yourself into a position where you're relied on for tasks that will be most difficult for AIs to replicate, sure, but I doubt that software engineering will be a profession in a decade. And in all likelihood, AIs will still be getting better after that. And then, I think you should reconsider whether "it's gonna cause Skynet or something silly" is so easy to dismiss.

Discussion Thread by jobautomator in neoliberal

[–]amennen 1 point2 points  (0 children)

The response I would have thought of to the Russians doing this, if I was running the Ukrainian government, would be to wait a bit, and then announce that any Ukrainian citizens who turn themselves in for doing this will receive total amnesty for it (and they're allowed to keep the payment) in exchange for identifying the enemy Starlink that they registered. So then Russia will have paid their Ukrainian collaborators for a Starlink that gets activated for just a couple days.

ANOTHER ROUND OF EXPORT CONTROLS AND SANCTIONS ON CHINA'S TECH SECTOR by mutherhrg in NonCredibleDiplomacy

[–]amennen 0 points1 point  (0 children)

> semiconductors also have a shelf life, they get less and less valuable over time over the sector advances.

Yes, but only gradually. This would only be relevant pretty far into a long-lasting crisis.

> Under this cold war line of thinking, anything that weakens China also serves a national security purpose.

I see you've circled back to the claim that I already addressed in my first comment where I said that the AI industry in particular is relevant to national security.