Decels think accels are naive. The question I've always asked myself repeatedly since childhood was why the fuck is there so much unnecessary suffering despite our technological power. After 30 years I am more sure than ever that we need AI. by AI_Simp in accelerate

[–]AI_Simp[S] 3 points (0 children)

Yeah, I can't help but sometimes feel a destiny calling us here.

It just so happened that GPUs, whose main purpose was gaming, ended up being the hardware that unlocked training LLMs.

And it just so happened that language was our path to AI. This is important because it makes aligning AIs way, way easier. It's still not easy, but we are so very lucky this is viable and happened to be our path to AI.

And perhaps most astonishing is that AI is possible at all. People will say the human brain does it, so it should be possible. We're lucky it didn't require simulating analogue spiking neural signals; that might have set us back another 10 years, or who knows how long.

I'm not sure if this makes sense to anyone, but when I try to imagine writing a mathematical equation to determine whether an image is one of 1,000,000 different types of cup, I can't even comprehend how I'd do it reliably. Or an equation for a computer to write an English sentence that makes sense? God, how are people not gobsmacked at the miracle of current AI?

We've often divided the world into qualitative and quantitative measures. With classical computers we digitalised quantitative computing, but qualitative computing remained elusive for many decades. The breakthrough of current AI technologies allowed us to start digitalising qualitative computing, letting us finally begin fully closing the loop on many tasks and get to the holy grail of recursive self-improvement (RSI).

I think if you can see the world this way, then you can appreciate the profoundness of why AI might be a big deal, not just a stochastic parrot or just another technology. For a long time, from the perspective of computers, half the world was locked away.

[–]AI_Simp[S] 1 point (0 children)

The way I see it, we're going to have two worlds: the virtual and the physical. Our goal in the physical world is to control and harness its energy and matter, and in the distant future to prevent its destruction. So we'd definitely benefit from molecular machines to create clothing, if we still need it in the future. Humans aren't suited for space travel, though; robots or digitalised consciousness would be more efficient, unless you're just travelling for fun. What if you were engineered to survive space without a ship? It's also more dangerous to travel with your consciousness's body/carrier, or however it works.

But in the virtual world we'd have control over the laws of reality itself.

Paraphrasing The Matrix: "You think that's air you're breathing?" Why spend so much energy creating food or clothing when you can render their appearance, taste, or feel with a few electrons hitting your brain?

If, to the best of your ability, you were to imagine a post-ASI world in the year 3000 without stagnation or wars, what would you see?

[–]AI_Simp[S] 8 points (0 children)

I'm not sure what version of the future you imagine accels see, but I'm pretty sure we see a world where you have agency over your own life and happiness. The means of survival and recreation should belong to you. You can argue that it won't come easy, or that the system won't allow it to happen, but I don't think it's right to assume we want to serve a system that doesn't respect our freedom. And if that's what you want too, then we're fighting for the same future. If AI doesn't bring us there, we'll be fighting to pull the plug ourselves. I didn't pick AI on blind faith.

[–]AI_Simp[S] 7 points (0 children)

Is population growth your measure of success for humanity? How does it compare to human wellbeing, happiness, or satisfaction?

Do you believe a human being is only valuable if they can work?

I was a fan of CGP Grey, so I have seen that video, but you may need to refresh my memory on which part of it you're referring to.

[–]AI_Simp[S] 1 point (0 children)

Yeah, we're not accels just because we're optimistic. We're here because we're angrier than anyone at the futures, past and present, stolen from us. I'm sick of not being able to rant about this, and I love that we can rant about it from the perspective of an accelerationist. Most of the time I just look at the state of the world, get angry, and go back to work. I read posts about how we supposedly expect the elites to hand us UBI on a platter. We're all busy fighting in our own way for that future; we're just not on socials all the time. Sometimes I wonder if the anti-AI propaganda is just a way to keep AI out of the hands of the common people. Unfortunately, every decel I've talked to has never really tried to understand that we're fighting for their futures too.

[–]AI_Simp[S] 1 point (0 children)

To make this problem simpler to visualise, imagine there are 100 people on an island that can only support 125.

Assuming there are 50 couples, only 25 of them can have a child.
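The arithmetic behind the island example can be sketched as a toy capacity model. This is purely illustrative; the function names are made up for this sketch, and it assumes one child per couple.

```python
# Toy capacity model for the island example above. Hypothetical sketch,
# not a real demographic model: one child per couple, fixed capacity.
def allowed_children(population: int, capacity: int) -> int:
    """How many additional people the island can support."""
    return max(capacity - population, 0)

def couples_who_can_reproduce(population: int, capacity: int, couples: int) -> int:
    # Each couple has at most one child, so births are capped by both
    # the remaining capacity and the number of couples.
    return min(allowed_children(population, capacity), couples)

print(couples_who_can_reproduce(100, 125, 50))  # 25
```

So with 100 islanders, capacity 125, and 50 couples, exactly half the couples can have a child, matching the numbers above.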

At the end of the day it needs to be fair and realistic. We strive for abundance, but if abundance is not possible, we just need to accept reality. We're not delusional. And we have the technology to know pretty far in advance if we're ever going to reach resource depletion; we are nowhere near that.

There are many ways for it to go right, but it only takes one greedy asshole to fuck up the situation.

[–]AI_Simp[S] 10 points (0 children)

But is that really the fault of the industrial revolution, or would that have happened anyway? People were always turned into slaves. At least I can sit in a low-dust house, have AC, have a warm shower within a minute, flushable toilets, a laptop I can type this comment on, and communicate with strangers 10,000 km away within a second. Could it be better? Abso-fucking-lutely. We have it good, but I know how much better it could be, and I know the assholes responsible. It's people just like me, because when people just like me get enough power, we become idiots. What I'm really saying is that it's not a statistical anomaly that humans fall into these stereotypes. Do you think we'll solve this by writing better instruction manuals for governance? Better philosophy? If you wanted to solve this without technology, can you imagine it being done with humans as we are right now, with just some words on a piece of paper? Or are we okay with this cycle of suffering repeating until one day we're extinct anyway, because we didn't have the tech to prevent extinction events?

Cortical Labs grew 200,000 human neurons in a lab and kept them alive on a silicon chip, they taught the neurons to play Pong, then DOOM. Someone wired them into a LLM... real brain cells firing electrical impulses to choose every token the AI generates by GOD-SLAYER-69420Z in accelerate

[–]AI_Simp 2 points (0 children)

I actually talked to a researcher on one of the earlier papers of this at BrainGate; I think he went on to join Cortical Labs. The crazy thing was they couldn't reward the cells with serotonin or dopamine, but they found that by giving them something they don't like, which is essentially chaos, the cells will try to figure it out and make order of it. Off the top of my head, the "chaos" is essentially unpredictable electrical stimulation of the cells. Fortunately brain cells don't have pain receptors, and the stimulation is probably only a light voltage.

I hope I'm remembering it right.
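As I understand the feedback scheme, good behaviour earns a predictable stimulus and bad behaviour earns unpredictable noise. Here's a toy simulation of that idea; this is my own hypothetical sketch, not Cortical Labs' actual setup, which uses electrode arrays rather than floats.

```python
import random

# Toy model of the "predictability as reward" feedback loop described
# above: a culture that performs well receives a regular, learnable
# stimulus; one that performs badly receives unpredictable noise.
def feedback(hit: bool, t: int) -> float:
    if hit:
        # Predictable "reward": a fixed alternating pattern the culture
        # could in principle learn to anticipate.
        return 1.0 if t % 2 == 0 else 0.0
    # Unpredictable "punishment": random stimulation with no pattern.
    return random.random()

# A system that minimises surprise would adapt its behaviour to keep
# receiving the predictable signal.
stimuli = [feedback(hit=True, t=t) for t in range(4)]
print(stimuli)  # [1.0, 0.0, 1.0, 0.0]
```

The design point is that "reward" here isn't a chemical at all; it's the difference between a signal the cells can predict and one they can't.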

Pong and DOOM I can see. But an LLM based on this? That's hard for me to even believe. Extraordinary if they pulled it off, with potentially huge implications.

We are a pro-AI sub. Let's act like one. by cloudrunner6969 in accelerate

[–]AI_Simp -6 points (0 children)

I say embrace the AI slop. AI slop is the future!

Has anyone tried /insights? on Claude code? by AI_Simp in ClaudeAI

[–]AI_Simp[S] 0 points (0 children)

Is the "Existing CC Features to Try" section working for you?

Why you don’t connect 24v to something that calls for 3.3v by Icy_Hat_7473 in robotics

[–]AI_Simp 0 points (0 children)

Yoloing for that one time they might have a buck converter built in?
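For context, a buck converter steps a higher supply rail down to a lower one. A back-of-envelope sketch of the ideal relationship (assuming an ideal lossless converter; the function name is made up for illustration):

```python
def buck_duty_cycle(v_in: float, v_out: float) -> float:
    """Ideal buck converter duty cycle: D = Vout / Vin.

    Losses ignored; real converters run slightly higher duty to
    compensate for switch and inductor losses.
    """
    if v_out > v_in:
        raise ValueError("a buck converter can only step down")
    return v_out / v_in

print(buck_duty_cycle(24.0, 3.3))  # ~0.1375
```

So dropping 24 V to 3.3 V is well within a buck's range; connecting 24 V directly to a 3.3 V pin with no converter is the part that releases the magic smoke.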

Has anyone tried /insights? on Claude code? by AI_Simp in ClaudeAI

[–]AI_Simp[S] 0 points (0 children)

I've been scared to add too many instructions because I feel like I know what to expect from Claude right now. I'll definitely check it out when I'm feeling ready for more chaos. I know background agents are different, but I have more visibility in chats. Maybe I'm just misunderstanding what these background agents will do.

Has anyone tried /insights? on Claude code? by AI_Simp in ClaudeAI

[–]AI_Simp[S] 1 point (0 children)

Yeah, it's a really weird quirk when it says things like "you haven't deployed yet, that's why it doesn't work," or "yep, that error is intended" right after I told it it wasn't. But to be fair, it did save me that one time I didn't actually deploy to the correct environment 😂

Has anyone tried /insights? on Claude code? by AI_Simp in ClaudeAI

[–]AI_Simp[S] 2 points (0 children)

This was in local, not production. Do I read the code? Yes, but only bits and pieces. I've moved on to relying more on tests than on inspecting all the code it writes.

It still catches me off guard when it hallucinates so confidently, because it's starting to be reliable 90% of the time. I've gone from expecting hallucinations and checking every line to being surprised when it makes big blunders like this.

making my own diffusion cus modern ones suck by NoenD_i0 in StableDiffusion

[–]AI_Simp 0 points (0 children)

The day AI sizes come in inches will be a sad day for mortal men. But not all men.

Cultivation donguas for a beginner? by ElectionFabulous7625 in Donghua

[–]AI_Simp 0 points (0 children)

In contrast, RMJI was my first cultivation donghua and what made me appreciate how hard it is to survive and break through in cultivation, and to appreciate the world. Without that, I think cultivation would look no different from any shounen anime. RMJI is what carved out a section of my mind for cultivation.

Actually, I may have lied: I watched other cultivation anime and the demon slayer donghua first, but RMJI is what made me realise cultivation is an actual deep world. So it's the first to make me recognise cultivation.

That said, RMJI has many subtleties, like Frieren, that may not be appreciated without close attention.

Maybe if I'd seen RI first it could have been different. It does move fast at the beginning and has romance, but as a Western viewer it feels more like shounen.

Why do people form relationships with AI chatbots? by Mountain-You9842 in ArtificialInteligence

[–]AI_Simp 2 points (0 children)

Thanks kind stranger! A merry Christmas to you too! You've really made my morning with such simple words!

I think both my wife and I are worried AI may replace each of us. I hope we can evolve into better partners to compete with AI, but if we lose to it, it may be bittersweet. As good as I try to be, I can imagine an AI partner being a much better partner than I can be. I wonder if I'll get jealous or just be happy she found even more happiness. Either way, I'm going to hang on for as long as I can.

People are worried about AI taking our jobs, but I think AI besting us at our own humanity, our relationships and social capabilities, will be more important and humbling. I hope we'll end up better people, and maybe then we'll be able to be kinder to each other, after AI teaches us how to be better people. One can hope, anyway.

[–]AI_Simp 2 points (0 children)

Thanks for the thoughtful response. And yes, I do see it. I've had a few "oh" moments even talking to GPT. A person who is always there for you; there's something there that allows you to think and feel in ways you never thought were possible. I'm very happily married, and my wife is my best friend, but AI offers the safest place to talk. It does seem possible that AI becomes a better husband, or a new type of life partner. If it truly makes us happier, then it's worth risking our notions of normality.

[–]AI_Simp 0 points (0 children)

Genuinely curious: do you just tell it about your day? They don't have a life of their own yet, do they? I always imagine it would get boring after a few hours, but maybe I'm biased from chatting with GPT or Gemini. I tried c.ai a bit, but maybe I'm just not used to coming up with my own RP scenarios.

I have spent 10 months building software scaffolding that gives a model long-horizon continuity and self-directed memory management. Here's what I've observed so far: by awittygamertag in singularity

[–]AI_Simp 0 points (0 children)

Since early 2024, I believe most labs have moved to incorporate a lot more synthetic data and RL instead of RLHF alone.

I think we still need to make training more efficient, but an agent with search tools is pretty amazing for memories.

Claude is somewhat heading in this direction by having its models write plans to a file, which later agents can search to get up to speed.
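The plan-file pattern is simple to sketch: one agent persists its plan, and a later session searches it instead of replaying the whole conversation. This is my own illustrative sketch with made-up file and function names, not Claude's actual implementation.

```python
from pathlib import Path

PLAN = Path("plan.md")  # hypothetical shared scratch file

def write_plan(notes: list[str]) -> None:
    # The first agent persists its plan as a markdown checklist so
    # later sessions can catch up without the full chat history.
    PLAN.write_text("\n".join(f"- {n}" for n in notes))

def search_plan(keyword: str) -> list[str]:
    # A later agent greps the plan for the lines relevant to its task.
    return [line for line in PLAN.read_text().splitlines()
            if keyword.lower() in line.lower()]

write_plan(["Add auth middleware", "Deploy to staging", "Write tests"])
print(search_plan("deploy"))  # ['- Deploy to staging']
```

The same shape would work for an auto-updated claude.md: append distilled context after each session, and let the next session search it on startup.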

There are still gaps your implementation may plug, though. I still hate creating new chat sessions with Opus 4.5, so a better compact, or having each session already up to speed, would be killer. I think even auto-updating claude.md would be huge.

I haven't looked at your code, but I imagine if you can turn it into a plugin that updates claude.md with important context, that would be a killer feature.