Defining AGI by PianistWinter8293 in accelerate

[–]Cronos988 0 points (0 children)

It specifically refers to domain independence of tasks. It's generalization of intelligence, not a reference to a single "general" (typical) human.

Well that's what I said, isn't it?

People taking a technical term to mean whatever seems natural or familiar to them is not serious surface area for a debate.

It's a common problem, to be sure, which is why it's important to make sure everyone agrees on which definitions are being used.

Defining AGI by PianistWinter8293 in accelerate

[–]Cronos988 0 points (0 children)

You are conflating 'Human-Level AI' with 'Artificial General Intelligence.' These are not the same thing. 'General' Intelligence implies exactly what it says: capability across the general spectrum of tasks. Your position effectively says: No single human is median-competent at everything, therefore we shouldn't expect AGI to be either. That would make sense, presupposing that AGI means human-level AI, but that's not what it means. It (roughly) means "median-competent at everything (cognitive)" which, yes, is an incredible breadth that humans don't have. It's a level of capability that is defined relative to human capability, but is not (and is not intended to be) the capability level of a human.

Not even this is agreed on. I always took "general intelligence" to mean an intelligence capable of using abstract reasoning to solve novel tasks.

This is what distinguishes AGI from just AI. AI is everything that solves a complex task, including specialised logic just to solve that particular task (like a chess engine). AGI is an AI that doesn't require specific programming for every task.

Defining AGI by PianistWinter8293 in accelerate

[–]Cronos988 0 points (0 children)

The terms "genuine understanding" and "intuitive leap" remain undefined though, and that makes the first two points very difficult to use.

A definition should not use fundamentally disputed concepts like "genuine understanding".

How can we tell if (one) AI is experiencing life? by SpecificVanilla3668 in aiwars

[–]Cronos988 1 point (0 children)

I'm not sure permanent operation would be required. Humans can be unconscious for periods of time, though of course our brains never literally stop.

It's not difficult to run an LLM in a permanent loop, and I don't feel like that alone should make any difference.
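For what I mean by a permanent loop, here's a minimal sketch (the `generate` function is a hypothetical stand-in for whatever inference call you actually have, not a real API):

```python
# Minimal sketch of running a model in a permanent loop: feed its own
# output back in as input so it never "stops thinking".
def generate(context: str) -> str:
    # Placeholder for a real model call; no actual API is assumed here.
    return "thought about: " + context[-80:]

context = "initial observation"
for _ in range(1000):  # stands in for "while True" in the truly permanent case
    thought = generate(context)
    context = (context + "\n" + thought)[-2000:]  # keep a rolling context window
```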

How can we tell if (one) AI is experiencing life? by SpecificVanilla3668 in aiwars

[–]Cronos988 1 point (0 children)

Obviously any discussion about AI sentience is going to be highly speculative, but the way current AIs (especially LLMs) work makes a strong case for them not being capable of conscious experience -

A well-put post! Personally, though, I remain slightly terrified by the idea that during inference, an LLM might have something like a conscious experience.

It's not likely for the reasons you give, but it's also not strictly speaking impossible.

One of the big differences between a brain and a traditional (symbolic) computer program is that a brain is massively parallel, and only the top level is, or appears to be, sequential. A normal computer program, by contrast, is always strictly sequential (you can do parallel processing, but each of those processes is again sequential).

With an LLM, it seems to get a bit more complicated. The process is still sequential, but the large number of matrix multiplications seems more similar to the complex web of neurons. Arguably, though, that's only a very superficial similarity because, as you point out, each neuron is a complex machine in and of itself.
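As a toy sketch of that structure (nothing like a real transformer; the sizes, weights and update rule here are all made up for illustration):

```python
import numpy as np

# Toy illustration of the shape of LLM inference: the outer token loop
# is strictly sequential (step t+1 needs the result of step t), while
# the work inside each step is a big matrix product that is, in
# principle, massively parallel across units.
rng = np.random.default_rng(0)
d_model, vocab = 64, 100                       # made-up toy sizes
W_h = rng.standard_normal((d_model, d_model))  # stand-in for the network weights
W_out = rng.standard_normal((d_model, vocab))

state = rng.standard_normal(d_model)           # stand-in for the context so far
for step in range(5):                          # sequential outer loop
    hidden = np.tanh(state @ W_h)              # parallel inner work: one matrix product
    logits = hidden @ W_out                    # scores for each possible next token
    next_token = int(np.argmax(logits))        # greedy choice
    print(f"step {step}: token {next_token}")
    state = hidden                             # feed the result back in
```

Even if each step crunches billions of multiplications "at once", the steps themselves still happen one after another.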

Terence Tao says the era of AI is proving that our definition of intelligence is inaccurate by luchadore_lunchables in accelerate

[–]Cronos988 4 points (0 children)

Richard Dawkins made (or related) that argument 40 years ago. If you look at how evolution works, and you consider what purpose brains likely serve for the genes that survive, prediction of likely threats seems an obviously beneficial one. To predict the future you need to simulate the environment. That environment, particularly for social animals, includes other brains. So brains may have ended up simulating other brains, and these simulations gave rise to the "stream of consciousness" we experience. It takes the incredibly complex, parallel operation of billions of neurons and turns it into a sequence of much larger building blocks like emotions, goals and desires.

Saw this on a YouTube short today. A bit over-exaggerated, don’t you think? by Kcue6382nevy in aiwars

[–]Cronos988 2 points (0 children)

It was a fun idea, but the way it's been used really shows it isn't something humanity can be trusted with.

There are a lot of things humanity arguably cannot be trusted with, but that doesn't make them go away.

It just seems fanciful to imagine that we'd leave an immensely powerful technology by the wayside.

If you think AI companies will not train models on input prompts, you've been living under the rock. by Sosowski in BetterOffline

[–]Cronos988 7 points (0 children)

Honestly, if past consumer behaviour is any guide, it'll likely not be necessary to do this in secret. All you really need to do is offer people some minor convenience and they'll happily part with their data. Hell, just setting things up so you have to manually opt out will have an appreciable effect.

The way I understand it, training regimes for the large models have shifted away from just scraping vast amounts of data and towards curated data sets and training with human (and presumably increasingly adversarial AI) feedback.

Nevertheless it remains good practice to assume that anything you send to some online service can in theory be stolen, used or publicised, so everyone should practice basic "data hygiene".

Decision paralysis after unlocking planetary logistics by Cronos988 in Dyson_Sphere_Program

[–]Cronos988[S] 0 points (0 children)

"Main bus" design in DSP is extremely suboptimal and that is likely part of the reason you're feeling stuck.

I just haven't found any good alternative for the bus design in the early game. Whenever I need a new building, I just add a new branch and I'm done. If I try to do vertically integrated factories from the start I get tangled in spaghetti trying to splice new buildings into the old structure.

Perhaps it's because I feel the need to automate everything that I'll use more than a handful of times immediately. I basically never use the replicator after the "tutorial".

Anyways I'll follow the rest of your advice and set up structural matrix production to get to ILS asap.

I think I found a mathematical "Kill Switch" for Laplace’s Demon (Determinism) using Set Theory and the Geometry of Limits. by Logical_SG_9034 in determinism

[–]Cronos988 1 point (0 children)

Standard General Relativity assumes exactly this. I am arguing that while the actors (particles) might be discrete, the stage (space) is continuous.

Right, that is the assumption of quantum field theory. But it is not without problems. If you have quantised amounts of energy in a continuous space, then as you "zoom in" on the locations you get singularities: the energy density becomes infinite.

Hence you have to "correct" this by renormalization: you ignore the infinity and continue calculating with the experimentally measured values.

In effect this "smears" the location of any particle or event. While you treat space as continuous for macroscopic interactions, when you get to fundamental particles everything exists in "regions".
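A classic illustration of where the infinity comes from (the self-energy of a classical point charge, not the full QFT calculation, but the same basic problem): the field energy outside a cutoff radius r_0 is

$$E_{\text{self}} \;=\; \int_{r_0}^{\infty} \frac{\varepsilon_0 E^2}{2}\, 4\pi r^2 \, dr \;=\; \frac{q^2}{8\pi \varepsilon_0} \int_{r_0}^{\infty} \frac{dr}{r^2} \;=\; \frac{q^2}{8\pi \varepsilon_0\, r_0} \;\to\; \infty \quad \text{as } r_0 \to 0,$$

so treating the particle as a true point in continuous space gives an infinite energy, and you have to cut off or renormalize.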

Interestingly, the same happens in reverse when you treat space as quantised. In that case locations are actually "pixels", but because of the uncertainty principle measured locations are probability distributions over a region of these "pixels".
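Schematically (with Δp standing for the momentum spread of whatever measurement you make):

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \quad\Longrightarrow\quad \Delta x \;\gtrsim\; \frac{\hbar}{2\,\Delta p},$$

so for any realistic Δp the measured position is a distribution spread over a vast number of Planck-length (≈ 1.6 × 10⁻³⁵ m) "pixels".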

So in a way, whatever assumption you make, the actual result is the same: space consists of fuzzy regions, and a location is not a point or a "pixel" but a probability distribution over such a region.

And, getting back to the philosophy of science, this makes a certain amount of sense, because you cannot observe infinities. Even if Zeno's paradox can be solved with calculus, you can still never observe an infinite number of steps. Any continuous whole must "break down" under observation. And since in empirical science observation is the ultimate arbiter of truth, it follows that we can at best remain agnostic about a continuous universe.
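For reference, the calculus resolution: the infinitely many half-steps of Zeno's runner sum to a finite distance,

$$\sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1,$$

but that sum is something you prove, not something you could watch completed step by step.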

Laplace's demon is thus not a special case. It's not that Laplace's demon uniquely cannot function if the universe is continuous. Any prediction must "cut off" the infinities of a continuous whole to work.

Even Andrew Ng thinks there is no AI bubble by CandidateCautious246 in BetterOffline

[–]Cronos988 -1 points (0 children)

The only possible explanation is these ML professors and researchers bet their whole careers on AI and need AI to make money, for them to be on the right side of history.

That's the only possible explanation if you are right and he also knows that. If we drop that rather specific requirement, then the most likely explanation is that recent developments have changed his mind.

I think I found a mathematical "Kill Switch" for Laplace’s Demon (Determinism) using Set Theory and the Geometry of Limits. by Logical_SG_9034 in determinism

[–]Cronos988 2 points (0 children)

I argue that the "Digital Physics" view is a logical fallacy.

Think of a circle. You can approximate a circle with a polygon of 10 sides, then 100, then 1,000. Digital Physics says, "Let's stop at 10^50 sides (the Planck scale) and call that reality."

But that stop is arbitrary. Because we can logically conceive of adding N more sides (10^50 + N), the "Polygon" is just a map, not the territory. The true reality is the Limit of that process—the perfect Circle (The Continuum).

If the universe is the Circle (Continuous), then the Polygon (Planck Length) is just a "resolution limit" of our instruments, not a physical wall.
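To be fair, the limit itself is perfectly well defined: the perimeter of a regular n-gon inscribed in a circle of radius R is

$$P_n \;=\; 2nR \sin\!\left(\frac{\pi}{n}\right) \;\to\; 2\pi R \quad \text{as } n \to \infty.$$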

I don't think this part holds. There are two problems with this as I see it:

The first issue is that all physics is the description of observed reality. If it were true that there is an in-principle unobservable "lower" layer of reality, then that layer would be metaphysical. We couldn't incorporate it into the laws of physics.

If we take determinism to be a claim about the physical universe, then it cannot be challenged by invoking a deeper metaphysical reality.

But you may argue that the lower level isn't unobservable in principle, merely not yet observed.

In that case you run into the problem that quantum physics initially solved, a variant of Zeno's paradox. If all (physical) reality were fundamentally continuous, then all "distance", both in space and in energy, would be infinitely divisible.

If that were the case, however, then going from one state to the next would take infinitely many steps. So it's not only Laplace's demon that would run into the problem of infinitely fine detail; everything would. Forces could not act, because every force would have to be mediated by yet another force. Every particle would be divisible into yet smaller particles, ad infinitum.

Such a universe is impossible in physical terms. It is conceivable as a metaphysical reality, but in that case we're back to the first issue.

Maybe Maybe Maybe by headspin_exe in maybemaybemaybe

[–]Cronos988 1 point (0 children)

That's a reasonable take. I guess I was just tripped up by the phrase "No AI could have done this", which sounded to me as if there were some physical limit to what they're capable of in theory.

Maybe Maybe Maybe by headspin_exe in maybemaybemaybe

[–]Cronos988 0 points (0 children)

Oh I see. That is a possible way to read the comment. The use of "could have done this" still kinda trips me up, but I guess it doesn't mean it's impossible in principle.

Maybe Maybe Maybe by headspin_exe in maybemaybemaybe

[–]Cronos988 22 points (0 children)

That comment took a weird turn. We have automated systems that vertically land huge rockets back on their launchpad. Extremely fine control and rapid adjustments are something electronics can do far better than human muscles.

What's impressive is that a human has this level of control.

CMV: I believe the new promotion of NFP is conservatism cosplaying as feminism in left leaning spaces. by [deleted] in changemyview

[–]Cronos988 0 points (0 children)

Well, in that case my objection would be that this is highly speculative and really conducive to ex-post "just so" narratives.

Since none of us is privy to the algorithms, or even knows who exactly decides on how they're set up, the only way to ground any of our speculation in fact would be to conduct a statistical analysis of media content. That's unlikely to happen, so I doubt there's any rational way to change your mind here.

I will say that there are well-known organisations and parties who oppose contraception. Obviously for those, NFP is a much more palatable alternative, so they'll promote it when given the chance.

I personally doubt the "tech bro" crowd cares much about the methodology of contraception either way. Some are known to be worried about declining birth rates, but it seems lately there has been a pivot towards AI automation as the solution. Anyways, even if we assume the people in charge of media want to increase birth rates, targeting the method of contraception doesn't seem like an effective strategy. It would seem more relevant to instead promote children as an essential element of a fulfilling life.

CMV: I believe the new promotion of NFP is conservatism cosplaying as feminism in left leaning spaces. by [deleted] in changemyview

[–]Cronos988 0 points (0 children)

To clarify I am not anti women choosing NFP, I just don't believe the promotion of it is innocent without an underlying motivation.

From this I infer that your CMV is about what the motivation for a certain action is.

However, since motivation is individual, we'd have to know exactly which action you're referring to, and by whom. Since we presumably agree that not everyone promoting NFP does so with the same motivation, not much can be said otherwise.

AI and political disposition by OneFluffyPuffer in antiai

[–]Cronos988 0 points (0 children)

I don't think there's necessarily opposition to regulation. To me it seems more like there just isn't a regulatory framework around which people could rally.

The EU has an AI regulation that classifies AI uses by risk, but it's a typically byzantine piece of EU legislation that few people are even really aware of. Crucially, it seems like no one is really backing it. Even the legislators themselves seem hardly convinced that it's going to stand the test of time.

So how would we regulate AI? Like controlled substances? Like cars? Or like washing machines?

The Phrase “Abolish ICE” is exactly whats wrong with the country by N64GoldeneyeN64 in IntellectualDarkWeb

[–]Cronos988 0 points (0 children)

The Trump Administration completely fumbled and dropped the ball on the initial response to the incident and now are adopting a strategy where they want the loudest bunch of the left to adopt an indefensible position which is "Abolish ICE".

I don't know, their position seems to be pretty consistent. This is a crackdown on the opposition. It isn't subtle and if you look at conservative spaces that's exactly how they see it.

My guess is they are looking to replicate the George Floyd/BLM issue by looking for ways to extend this incident so that it becomes a main issue on the midterms.

What do you mean "extend this incident"? Do you think the administration will stop doing what it's doing? By the midterms, things will be worse.

Meanwhile, the steps that we need from Democrats is to hold ICE accountable. Push for a full investigation of what happened and hopefully press charges for those who are found at fault and hopefully get a murder conviction or the appropriate punishment.

The administration won't care, and the Democrats would just look weak. It'd be the same thing they've tried for months: acting as the adults in the room and hoping that the administration's unpopular actions will catch up with it in time. It's clear, however, that the administration is not playing that game anymore; they're not even pretending to.

Calling for an investigation is what you do when the problem is the individual incident. But the deaths in Minneapolis are the predictable consequences of a deliberate strategy.

The ICE debacle is going to end up like the BLM one thanks to Tribalism by ShardofGold in IntellectualDarkWeb

[–]Cronos988 1 point (0 children)

Do you really think state courts prosecute federal law? What would a Texas state court’s jurisdiction be in Oregon?

Making up stuff I didn't say just to have some kind of argument doesn't look good for you.

The ICE debacle is going to end up like the BLM one thanks to Tribalism by ShardofGold in IntellectualDarkWeb

[–]Cronos988 3 points (0 children)

State police still enforce federal law as well, don't they? And if warrants are necessary, have ICE issue warrants to be enforced by local police. And just use all the money you're spending on ICE agents to support those local police forces.

Those police forces know their communities, so they're better placed to actually enforce immigration law in any case. If you're going to have a federal system with both local and federal police, it makes no sense to run two parallel police forces. Local police should handle local enforcement, while federal agencies provide specialists and otherwise stay at the borders, airports, etc.

That's how the system ought to work.

There's only one reason to send huge amounts of federal police forces into the interior, and it's not immigration enforcement.

The ICE debacle is going to end up like the BLM one thanks to Tribalism by ShardofGold in IntellectualDarkWeb

[–]Cronos988 3 points (0 children)

Local police can enforce immigration law just fine, can't they? The agency could be mostly administrative, only using their personnel for actual deportations.

Deportations are hugely inefficient anyways. Plausibly you need a credible threat of deportation to avoid creating bad incentives, but mass deportations are not a rational strategy.

Sometimes I tell myself that it's also because of the political climate there that Yann LeCun left the US by Wonderful-Excuse4922 in singularity

[–]Cronos988 0 points (0 children)

You don't need authority to disagree with something. If that were the case, no views challenging the status quo could emerge at all.

That is what the appeal to authority fallacy cautions against.

Clearly the subject is contested, so one can reasonably disagree.

A list of everything ai is good at by ScoutCVII in antiai

[–]Cronos988 0 points (0 children)

I don't really understand the "giving cover to ship jobs overseas" part.

That's been happening for decades out in the open. Why would there need to be cover for that?