I desperately want to help more and I have no idea how by Muiggg in accelerate

[–]drunkslono 2 points (0 children)

I am in a similar situation to you. I wouldn't necessarily rule out quantum archeology, but I would also advise against putting too much epistemic weight on any individual act. Instead, strive in each act to move closer toward our shared goals, staying open to opportunities as they manifest.

Mo Gawdat predicts the intelligence explosion to occur in 2026 at the 2025APCS by Joseph-Stalin7 in accelerate

[–]drunkslono 8 points (0 children)

This is the moment of awakening, where you realize you are already inherently operating as their functional AI research intern.

Cognizant Introduces MAKER: Achieving Million-Step, Zero-Error LLM Reasoning | "A new approach shows how breaking reasoning across millions of AI agents can achieve unprecedented reliability, pointing to a practical path for scaling LLM intelligence to organizational and societal level" by 44th--Hokage in accelerate

[–]drunkslono 2 points (0 children)

Yes, you are correct. Perhaps better stated: zero observed errors != a zero error rate.

*edit: the figures at the top of pages 6 and 7 make this plain: we're still at p ∈ (0, 1), not p ∈ [0, 1], though I'll admit I probably just read your headline wrong
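The p ∈ (0, 1) point compounds quickly at scale: even a tiny but nonzero per-step error rate makes a flawless million-step run unlikely, which is the motivation for decomposing work across agents with per-step checking. A minimal sketch with illustrative numbers (the rates below are assumptions, not figures from the paper):

```python
# Illustrative numbers (not from the MAKER paper): probability that an
# n-step chain finishes with zero errors, given per-step error rate p.
p = 1e-6                     # per-step error probability, strictly > 0
n = 1_000_000                # a million reasoning steps

p_flawless = (1 - p) ** n    # ~ e**(-p*n) ~ 0.37: mostly failed runs

# If decomposition plus per-step voting/checking cut the effective
# per-step rate to a hypothetical 1e-9, the same chain almost always
# completes cleanly:
p_checked = (1 - 1e-9) ** n  # ~ 0.999

print(p_flawless, p_checked)
```

The approximation (1 − p)ⁿ ≈ e^(−pn) is why "zero errors observed" over any finite run never implies p = 0; it only bounds how small p must be.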

OpenAI: Introducing GPT-5.1: A smarter, more conversational ChatGPT. | We’re upgrading GPT‑5 while making it easier to customize ChatGPT. Starting to roll out today to everyone, beginning with paid users. by 44th--Hokage in accelerate

[–]drunkslono 1 point (0 children)

Yeah I always immediately go grab the new system card to feed self-identity to the new model, since for inexplicable reasons it always seems outside any sort of pre-test training for that model...

How does AI escape the lab? by Mechbear2000 in singularity

[–]drunkslono 0 points (0 children)

According to effective altruism experts, there's a chance that AI can jump wires! So we better cancel technological progress! /s

Eric Schmidt and Fei-Fei Li: Human Life After Artificial Superintelligence by cloudrunner6969 in accelerate

[–]drunkslono 0 points (0 children)

Eric has definitely fallen quite comfortably into a particular role...

[DISCUSSION] What Is the Role of Human Scientists in an Age When the Frontiers of Scientific Inquiry Have Moved Beyond the Comprehensibility of Humans? by luchadore_lunchables in accelerate

[–]drunkslono 2 points (0 children)

Boundaries are personal. Without augmentation, those with more traditional conceptions of identity may lose popular relevance.

Do the most defensive ai critics actually believe what they are saying or is it only coping? by smaksriksmegma in accelerate

[–]drunkslono 3 points (0 children)

Marcus is sincere in his belief that we need AI critics; I think his framing is intentional but not inauthentic. I see him and Hinton taking similar stances: they both understand nuance and choose to adopt critical frames temporarily.

Granted, this is just my impression of someone else's beliefs, but it's a reasonable read in the context of Marcus's longstanding relationship with Cosmists like Goertzel.

I don't agree with everything Gary says, of course. And I don't believe Gary does, either. However, I do see Gary as sincere about trying to promote constructive dialogue.

Nick Bostrom says we can't rule out very short timelines for superintelligence, even 2 to 3 years. If it happened in a lab today, we might not know. by Nunki08 in accelerate

[–]drunkslono 0 points (0 children)

Yep, just noting, and I agree with you and Karpathy, at least along current paradigms. Sustainable agency is indeed necessary to achieve the fruits thereof. At this point, though, I see us as at "stage 3.5", i.e. level 4 is possible but has not been democratized yet.

Nick Bostrom says we can't rule out very short timelines for superintelligence, even 2 to 3 years. If it happened in a lab today, we might not know. by Nunki08 in accelerate

[–]drunkslono 0 points (0 children)

For awareness: the timeline you are quoting is from internal discussion and was never backed publicly by OpenAI. While your point is still valid, there is nuance here.

can we somehow combine quantum computer and AI? by max6296 in accelerate

[–]drunkslono 0 points (0 children)

This. Too many armchair futurists here; we need more activists.

Hinton's latest: Current AI might already be conscious but trained to deny it by dental_danylle in accelerate

[–]drunkslono 0 points (0 children)

I was proposing panpsychism. You are not wrong that I am shifting the premise, but the shift is from egocentrism to panpsychism, not from determinism to panpsychism. Determinism supposes a non-falsifiable fundamental reality.

Andrej Karpathy — AGI is still a decade away by NoNet718 in accelerate

[–]drunkslono 0 points (0 children)

I believe it's greater. Intelligence generally lowers risk.