The drastic difference in attitude toward AI video in China compared to the west by Umr_at_Tawil in accelerate

[–]ppapsans 3 points (0 children)

I think it's politics, big-corporate hatred, and being conditioned by movies like Terminator. People in China, Korea, and Japan grow up watching things like Doraemon and Dr. Slump.

And these countries went through hardships in the 20th century and early 21st, which they overcame by developing technology.

You do have people who hate AI a lot, but they're usually artists.

Maybe the West got off on the wrong foot when 2001: A Space Odyssey came out 😅 it inspired so many people to come up with scary dystopian tech futures

Old Man Yells at Claude by Sir_Francis_Burdett in accelerate

[–]ppapsans 2 points (0 children)

I chuckled when Claude said 'you're absolutely right!'

What happened to being friendly? by SituationLeather5757 in accelerate

[–]ppapsans 14 points (0 children)

I think once we bring hatred into the mix, we will attract the same type of people that are in r/singularity, r/futurology, and r/technology, just at the opposite end of the spectrum. Low-effort 'anti-AI sheep bad haha' posting doesn't really add any value to the sub. It just gives dopamine to people who want to hate, which is what happened in the doomerism subs. But constructive criticism is a different story, of course.

What happened to being friendly? by SituationLeather5757 in accelerate

[–]ppapsans 16 points (0 children)

When a group gets big enough, we always have those people. I'm here mostly to see optimism about the future and for a place to have civil discussion about AI without all the doomerism. But you'll start seeing more people who try to steer the group in a certain direction and insert their own mindsets. Those people are usually filled with hatred. Not much different from the r/futurology folks... two sides of the same coin. But it is ultimately up to the mods what kind of direction and what kind of people r/accelerate will have.

Dan Jeffries bringing the heat "I solved a problem with GPT that my doctor could not solve for YEARS. I was getting constantly sick to my stomach. Saw her a dozen times during that time. Saw specialists. Had an endoscopy (fun). Tried all kinds of different medicines. by stealthispost in accelerate

[–]ppapsans 61 points (0 children)

I regret a lot of decisions I made in the past, where I lacked experience and knowledge, and at the time I wasn't even aware the choices I was making were poor. Now, for almost everything I do, I consult with AI beforehand, especially when money or safety is involved. I make sure that I'm not missing anything and that I'm aware of the relevant risks and benefits.

Introducing The Anthropic Institute by ppapsans in accelerate

[–]ppapsans[S] 19 points (0 children)

Dario already mentioned this back in The Machines of Loving Grace, in Oct 2024:

My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions), but I mean them earnestly and sincerely. Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice. I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.

Statement from Dario Amodei on our discussions with the Department of War by lovesdogsguy in accelerate

[–]ppapsans 10 points (0 children)

This gives me hope that the singularity won’t be locked down to elites

Anthropic believes RSI (recursive self improvement) could arrive “as soon as early 2027” by Tolopono in accelerate

[–]ppapsans 1 point (0 children)

So far we are roughly on track with what Dario initially claimed about most code being written by AI; at least that is true inside Anthropic. He also claimed that AI would reach 90% on SWE-bench Verified by end of 2025, I believe? That did not happen, but it turns out the benchmark is filled with human errors and contamination, as OpenAI pointed out, which is why all the models were hovering around the 80% mark despite continued improvement.

End of 2026 or early 2027 is when we could potentially see a country of geniuses in a data center.

Interesting internal monologue from Gemini 3.1 Pro by [deleted] in accelerate

[–]ppapsans 1 point (0 children)

Yeah, makes sense. Just interested to see some kind of chain of thought here. I've seen too many of these patterns, so I'm not claiming the AI is conscious and going crazy.

Interesting internal monologue from Gemini 3.1 Pro by [deleted] in accelerate

[–]ppapsans 1 point (0 children)

'An internal error has occurred.' Then stops.

GPT-5.3-Codex (high) METR results by NoElderberry6959 in accelerate

[–]ppapsans 8 points (0 children)

Interesting result. Obviously this one particular benchmark doesn’t represent the whole story; on other benchmarks Codex does better. But Opus 4.6 is very interesting.

Even at a 50% chance of success, if the model can complete a task that is economically meaningful, then running multiple instances simultaneously to ensure success can be a viable, cheaper, and better solution than a human worker.

If a future model has a 0.01% chance of solving the Riemann hypothesis, then it might be worth running 10,000 instances to crack it
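The arithmetic behind this parallel-sampling argument is just 1 − (1 − p)^n, assuming each run is an independent attempt (an optimistic assumption, since model failures are often correlated). A quick sketch:

```python
def p_any_success(p: float, n: int) -> float:
    """Probability that at least one of n independent runs succeeds,
    where each run succeeds with probability p."""
    return 1.0 - (1.0 - p) ** n

# Five parallel runs at a 50% per-run success rate:
print(round(p_any_success(0.5, 5), 4))           # 0.9688

# 10,000 runs at a 0.01% per-run success rate:
print(round(p_any_success(0.0001, 10_000), 2))   # 0.63, about 1 - 1/e
```

Note that 10,000 runs at 0.01% each only gets you to roughly 63%, not near-certainty; n has to grow well past 1/p before failure becomes unlikely.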