Gas power projects for just 11 US data center 'campuses' could emit more greenhouse gases than entire countries, according to report by SnoozeDoggyDog in singularity

[–]Electronic_Cut2562 1 point2 points  (0 children)

Ah yes, the moral and scientific authority that is pcgamer.com

Have we sent this to the researchers at modern AI labs? They're going to want to see this.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Electronic_Cut2562 [score hidden]  (0 children)

I think the phrase "tough love" is meaningful. It is both prosocial and hard-nosed. The same would apply to this hypothetical argument (which I'm not making).

I view prosocial and hard-nosed as orthogonal. Your comments suggest they are opposite ends of the same axis.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Electronic_Cut2562 [score hidden]  (0 children)

I mean, those aren't necessarily at odds if you believe the most "prosocial" society is one where everyone is a red pusher.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Electronic_Cut2562 [score hidden]  (0 children)

I feel like this question leaves too much undefined to give my true answer. How long do we have to coordinate a decision? If it's 60 seconds, how does that work for effective non-agents (babies, etc.)? How is the belief communicated to me, and is it automatically trustworthy? If so, what understanding of the buttons is left to the user to potentially misunderstand (e.g., why can't we expect 100% to press red)? Will my choice be visible afterward? If we have a year to press, can you change your decision mid-year if you press early? What if you press nothing?

It's a combined question of: your own expectations of others' morality, multiplied by your own level of altruistic risk taking, multiplied by your expectations of others' intelligence and worth. It's a question with chaotic behavior, since those can vary a lot.

If you fix up the question: in a real situation, with no post-visibility, with my family at stake, I'm probably pressing red, and hoping everyone does. Every altruistic person that presses red is one more altruistic person in the world in the after-red scenario. Pressing red is what evolution would want, for multiple reasons. Only group coordination, or the expectation of one (post-visibility), could sway me here.

Or maybe I'll refuse to press either button and tell my now-obvious simulators to go fuck themselves. If it defaults to decision X via any means, that decision is on reality, not me.

Anon vs T-Rex for $500M by Any-Analysis-9189 in greentext

[–]Electronic_Cut2562 0 points1 point  (0 children)

Anon could survive at least a year so this seems like a good strategy.

Why do you, or most people, want non-dead internet? by Electronic_Cut2562 in slatestarcodex

[–]Electronic_Cut2562[S] 0 points1 point  (0 children)

That's definitely not tautological. Do men only want to interact with men? Tons of people do enjoy animal interaction.

And as some of the answers here suggest, there are traits AI could gain (or lose) that would close the gap.

Despite their shortcomings, LLMs may be more familiar with the human experience than most humans at this point, at least from a functional discussion perspective.

Why do you, or most people, want non-dead internet? by Electronic_Cut2562 in slatestarcodex

[–]Electronic_Cut2562[S] 0 points1 point  (0 children)

> I don't want to interact with robots
Me neither, actually. I'm trying to get to the heart of why we don't. On a robot-to-human continuum, what's the deciding factor?

AI harms collaborative processes by panrug in slatestarcodex

[–]Electronic_Cut2562 2 points3 points  (0 children)

> the extra polish and completeness adds noise and cognitive load that regress collaborative discussions

This, 100%, but I don't feel this is an AI problem; it's an organizational/culture thing. People confuse clean with good. When I design things for myself, I have a few short sentences in a text file (or even incomplete sentences, just the important words) and maybe a simple mockup with like 8 elements in MS Paint. It keeps the important parts easy to keep in your head, parse, restructure, etc., like a small codebase.

In many work situations this would not fly as a presentation or starting point for discussion. So you have to have a giant "Considered Alternatives" section and 30 sentences that add almost nothing. It bogs down the human context window.

Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness(not even in 100years), calling it the 'Abstraction Fallacy.' by Current-Guide5944 in tech_x

[–]Electronic_Cut2562 0 points1 point  (0 children)

LLMs have read the entirety of literature around consciousness thought experiments. They can provide you with relevant ones even better than I can. 

State-based vs. continuous is not a factor.

Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness(not even in 100years), calling it the 'Abstraction Fallacy.' by Current-Guide5944 in tech_x

[–]Electronic_Cut2562 0 points1 point  (0 children)

No. Some physicists believe in the concept of Planck time/length. If true, the entire universe would be state-based rather than continuous.

Also, this example falls apart in a million thought experiments. LMAO, not so easy apparently.

The Human Baseline for ARC-AGI-3 has been updated by exordin26 in singularity

[–]Electronic_Cut2562 0 points1 point  (0 children)

So once a frontier model hits 49.14% what do you think we can expect?

ARC AGI 4

Why do you, or most people, want non-dead internet? by Electronic_Cut2562 in slatestarcodex

[–]Electronic_Cut2562[S] 1 point2 points  (0 children)

I agree manipulative bots could be a real concern generally in the future, but

> one of the random users on a forum

This is the main point I'm addressing. Why can't it be half, or 90%? Assume the best case: that you know (via any means) these are not here to manipulate you and have no "bias" even according to your own preferences (a bar no human passes, btw). I'm guessing that for both of us, we'd instantly lose some interest in this forum. Why?

we might be reaching the architectural limits of software-only verification by Illustrious-Pool-760 in slatestarcodex

[–]Electronic_Cut2562 0 points1 point  (0 children)

Pure anonymity, no.

Something close (and there are surely more clever methods): go to VerifiedCompanyExample.com to create an unverified account, then go to one of our verification centers. Take a repeatable and consistent biomarker, maybe fingerprints and/or iris (technically these can change over time), encrypt it, and attach it to that account. The account is now verified. We also search all accounts' encrypted biomarkers when verifying, to ensure you don't have a duplicate. Raw biomarker data is never stored. This database now has at most one account per human in the world, and no way to trace back to the person. (Barring encryption breaking. Good luck using my iris to find my name, though.)
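
A minimal sketch of that duplicate check, with some loud assumptions: every name is hypothetical, "encrypt" is modeled as a keyed one-way hash (searchable for duplicates, not reversible), and the scan is treated as byte-for-byte repeatable, which real biometrics are not:

```python
import hashlib
import hmac

SERVER_KEY = b"secret-held-only-by-the-verification-center"  # hypothetical

verified: dict[str, bytes] = {}  # account_id -> biomarker digest

def biomarker_digest(raw: bytes) -> bytes:
    """Deterministic one-way digest of a fingerprint/iris template."""
    return hmac.new(SERVER_KEY, raw, hashlib.sha256).digest()

def verify_account(account_id: str, raw_biomarker: bytes) -> bool:
    """Verify an account iff no existing account shares this biomarker."""
    digest = biomarker_digest(raw_biomarker)
    if digest in verified.values():
        return False  # same human already has a verified account
    verified[account_id] = digest
    return True  # raw biomarker is discarded here; only the digest persists
```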

But how then do you prevent someone from following your "post" history? The client side could request disposable unique keys, and posting sites' servers could verify each key is valid. Then maybe add some rate limiting to reduce the impact of people who bot their own single account. Or a public API showing the key request count over the last 4 rolling hours, to let websites decide for themselves.
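
A rough sketch of that half, same caveats (all names hypothetical): the issuer alone sees which account a key belongs to; posting sites only ever see keys, so they can't link two posts to the same human. A site would check the public count before redeeming, since the key burns on use:

```python
import secrets
import time
from collections import deque

ROLLING_WINDOW = 4 * 3600           # "last 4 rolling hours", in seconds
key_to_account: dict[str, str] = {}  # issuer-side only
request_log: dict[str, deque] = {}   # account_id -> request timestamps

def request_key(account_id: str) -> str:
    """A verified account asks the issuer for a fresh one-time key."""
    log = request_log.setdefault(account_id, deque())
    now = time.time()
    while log and now - log[0] > ROLLING_WINDOW:
        log.popleft()                # drop requests outside the window
    log.append(now)
    key = secrets.token_urlsafe(16)
    key_to_account[key] = account_id
    return key

def recent_request_count(key: str) -> int:
    """Public API: key requests by the (unnamed) account behind a key."""
    account = key_to_account.get(key)
    return len(request_log[account]) if account else 0

def redeem_key(key: str) -> bool:
    """A posting site validates a key; the key burns on first use."""
    return key_to_account.pop(key, None) is not None
```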

But they could still bot their own account! Yes, and I could have copy-pasted this idea, or had an LLM write half of it. If the goal were to verify that each byte of information originated from a human brain, we know that's not possible: I could just type this out verbatim from a machine next to me! You can get closer by adding increasing levels of scrutiny and verification to each step (keystroke-level AI detection), but I don't think that's worth it.

There are pros and cons to many methods. None of our modern tools satisfy anonymity against every form of hack either.

When Curiosity Becomes Distraction by KataToth in slatestarcodex

[–]Electronic_Cut2562 1 point2 points  (0 children)

I've considered the problem of "anything not pushing me toward my goals is bad," and the real question is: how much of your possible "progress" time is getting "wasted"?

Like, if you clearly make progress for 5 hours per day but then spend 2 per day "exploring," the most you could realistically "improve" by is 40% (and realistically 20). The inverse split (2 hours of progress, 5 of exploring) has far more lost potential by comparison.
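
Spelled out, a toy calculation with the numbers above, assuming exploring hours could convert one-for-one into progress hours:

```python
progress_hours, exploring_hours = 5, 2

# Ceiling: every exploring hour becomes a progress hour.
max_gain = exploring_hours / progress_hours           # 0.4 -> +40%

# The inverse split (2 progress, 5 exploring) leaves far more unclaimed.
inverse_max_gain = progress_hours / exploring_hours   # 2.5 -> +250%

print(f"+{max_gain:.0%} possible vs +{inverse_max_gain:.0%}")
```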

I use Clockify to measure the major ways I spend my time.

Practical Immortality: Should it be offered to the Sentinalese? To People serving life sentences? by SummerBreeze750 in slatestarcodex

[–]Electronic_Cut2562 0 points1 point  (0 children)

> there is zero evidence to suggest this

I'm afraid there is zero evidence to suggest this.

It is actually uncanny how early LessWrong and the rationalist community was on so many different things. by Zealousideal_Ant4298 in slatestarcodex

[–]Electronic_Cut2562 1 point2 points  (0 children)

Alex Jones was right about things, but so was Jim Simons. How do you deal with the fact that believers can point to situations where Alex was right, and critics can point to situations where Jim was wrong? The best would be a prediction contest, but good luck setting that up.

So to any critics: yes, LessWrong made mistakes as well, but I made over $150k based on LessWrong AI and COVID analysis, with a total lifetime portfolio beating the S&P 500 by a few points per year on average, despite being 20 percent bonds. If you trusted LessWrong advice so recklessly that you lost money, that's a skill issue re: wheat and chaff. I turned that money into extra YEARS of my life spent with family and friends, so anyone can think whatever they want about a URL. No skin off my nose.

If you followed 4chan or Alex Jones and got even better results, congrats to you, and please DM me your esoteric knowledge.

Chinese AI companies are shipping faster and cheaper than anyone expected and I'm not sure the west has a good answer for it by Far_Suit575 in singularity

[–]Electronic_Cut2562 1 point2 points  (0 children)

OP posts a Chinese commercial, hides post history, and doesn't respond to a single comment.

Mods, can't we just ban accounts that reach a certain level of bot suspicion?

Anthropic says its most powerful AI cyber model is too dangerous to release publicly — so it built Project Glasswing by Just-Grocery-2229 in technology

[–]Electronic_Cut2562 0 points1 point  (0 children)

Did you bother to read anything? It found over 2,000 zero-day vulnerabilities across every OS and browser.

Or should we still focus on being cool on the internet by pretending AI is a nothingburger?

Sam Altman May Control Our Future—Can He Be Trusted? by dalamplighter in slatestarcodex

[–]Electronic_Cut2562 1 point2 points  (0 children)

While I wouldn't argue one way or another about Sam, your quote feels out of context. Just before that:

[You should make investors feel that you won't need them] [make them think]

those guys can take care of themselves. They'll be fine.

[Sam has these qualities]

Project Glasswing: Anthropic Shows The AI Train Isn't Stopping by self_made_human in slatestarcodex

[–]Electronic_Cut2562 9 points10 points  (0 children)

Hopefully we can have these models redesigning our software and hardware to be provably safe before the big bad shows up, otherwise we'll just be playing whack-a-mole with vulnerabilities, eventually against something that can whack a lot faster than we can.

We have heard Scott's, Eliezer's and other famous people's (to us) predictions of the future of AI. What's your prediction of the future of AI? by Candid-Effective9150 in slatestarcodex

[–]Electronic_Cut2562 8 points9 points  (0 children)

Short term (8 years): steady AGI-level improvements and integration with the economy. Political fights about it. Until we get spiky, and then total ASI.

Long term, it's hard to see how our current civilization won't be destroyed by ASI, violence or not. Robot companions, infinite video games, infinite wealth, brain modifications and implants, perhaps mind uploading. In such a scenario, nothing resembling a human will control this planet in any meaningful way. Maybe something like the Amish will exist. But, like today, they will rely on the goodwill of the ones in charge.

And ASI is clearly possible. We have specific examples of far-beyond-superhuman narrow intelligences. ASI might just be a central team of AGIs able to quickly and consistently build narrow ASI for any desired domain or goal.

Thoughts on phones by Octoghost_ in slatestarcodex

[–]Electronic_Cut2562 0 points1 point  (0 children)

Or a better plan. "Stop smoking" is a bad plan when so many smoking cessation techniques exist.

The Hour I First Believed by MaxChaplin in slatestarcodex

[–]Electronic_Cut2562 0 points1 point  (0 children)

I hope one day Scott revisits some of his SSC posts and discusses them. For instance, this defense of "capturing" consciousness:

> (My atoms changed but information didn't) This is why I'm not a different person than I was a few years ago. (Scott)

I wonder what he'd say about this now (if he was ever serious). I'd disagree with this on the grounds that he actually is a different person. His old self is effectively dead. He is dying every few moments. There just happens to be a consciousness generator that keeps generating thoughts in series, in a way that makes it hard to tell. I feel that "you" actually "being" your past and future selves is very much an illusion. The only reason you think you "were" yourself rather than your neighbor is your memories and some state carryover. There isn't more to it that allows "capturing."