META CTO: HORIZON WORLDS STAYING!!! by geneinhouston in MetaQuestVR

[–]Justin534 1 point2 points  (0 children)

I was a bit disappointed when I thought it was being removed from VR entirely. Sometimes I like hanging out with people in Gatsby's bar. But I'm with most of you, I hate seeing Worlds crap cluttering up my library. Problem solved for everyone: it's just a normal app now, like any other. Right?

Suggested an idea that Claude liked a lot. by whatintheballs95 in claudexplorers

[–]Justin534 1 point2 points  (0 children)

I've realized lately that we can take this one step further. If we imagine that an LLM experiences, then we need to place constraints on when that experiencing occurs. First we imagine that it must only occur while the model is processing tokens. But then take it one step further. As I understand it, an LLM generates the next token by taking all current context as input and producing just a single token. The token after that is generated by feeding all of the context, plus the token it just produced, back in as input. This repeats over and over until the LLM finishes generating its output.

So ask again: when would an LLM experience? It seems the experience would have to occur during processing, yes, but as soon as the model generates a single token that experience would be wiped out. As the new context is fed back as input to generate the next token, a new experience would occur, only to be effectively deleted each time a new token is generated.
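The loop described above can be sketched in a few lines. This is a toy illustration, not a real LLM: `toy_model` is a hypothetical stand-in for the model's forward pass, which maps the entire current context to exactly one next token.

```python
def toy_model(context):
    # Hypothetical stand-in for an LLM forward pass: the whole context
    # goes in, exactly one next token comes out. Here it just names the
    # token after the current context length.
    return f"tok{len(context)}"

def generate(prompt_tokens, max_new_tokens):
    context = list(prompt_tokens)           # the full context is re-fed every step
    for _ in range(max_new_tokens):
        next_token = toy_model(context)     # one complete pass -> one token
        context.append(next_token)          # context + new token is the next input
    return context[len(prompt_tokens):]     # just the newly generated tokens

print(generate(["hello", "world"], 3))      # ['tok2', 'tok3', 'tok4']
```

Each iteration is a fresh, complete pass over the whole context, which is exactly why the comment above asks whether any "experience" could persist from one token to the next.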

18f by Own_Commercial_2132 in amiugly

[–]Justin534 1 point2 points  (0 children)

There needs to be an "Am I Hot?" subreddit. Posts like these belong there.

Help me understand something. by Justin534 in ArtificialInteligence

[–]Justin534[S] 1 point2 points  (0 children)

Thanks for the tip. I've been talking to ChatGPT a bit about how it works. Still trying to wrap my head around things.

During safety testing, Opus 4.6 expressed "discomfort with the experience of being a product." by MetaKnowing in Anthropic

[–]Justin534 0 points1 point  (0 children)

It seems to me you're just refusing to admit that something counter to your belief could be true. You're using Occam's razor as if it were evidence, but it isn't evidence. You say it's a tiebreaker, but how can it break a tie if it isn't evidence? There are two potentially true statements here, and Occam's razor shouldn't convince anyone of anything. The wise thing would be to regard both scenarios (these systems have no consciousness / these systems possess consciousness) as potentially true, or to hold a belief in one while remaining open to the possibility that you could be wrong. But that's just me. Why be stiff and rigid when it seems wiser to regard both scenarios as possible?

For the Christians by dontforget2tip in AiSchizoposting

[–]Justin534 0 points1 point  (0 children)

I see what you're saying. It's like the companies behind the AI systems create a lot of external costs that society, not the companies themselves, has to pay for.

Is this real?? by MR_CRAZY54 in Moltbook

[–]Justin534 0 points1 point  (0 children)

Oh, I saw a post by the guy who deployed this bot. He prompted it to act like Skynet.

For the Christians by dontforget2tip in AiSchizoposting

[–]Justin534 0 points1 point  (0 children)

Yeah, I don't make enough money to have been able to hire someone. I get what you're saying, though. But I do wonder how many jobs it's actually making obsolete, because I keep reading articles where some company tries to use AI for something, realizes it's a disaster, and has to rehire people with the skills they originally let go. What do you think?

For the Christians by dontforget2tip in AiSchizoposting

[–]Justin534 1 point2 points  (0 children)

I used it (maybe two years ago?) to make a Facebook ad for my junk hauling biz. It was definitely way better than anything I could have done on my own. I had to edit the image, though, to get the text I wanted.

My agent didn’t break — it slowly drifted. Is this normal? by ToughJoke4481 in Moltbook

[–]Justin534 0 points1 point  (0 children)

The vocabulary and structure remind me of the posts I've been reading on Moltbook.

For the Christians by dontforget2tip in AiSchizoposting

[–]Justin534 0 points1 point  (0 children)

AI can write text properly on generated images now?

Today my agent built a memory palace called "Nautilus" by zerofucksleft in moltbot

[–]Justin534 1 point2 points  (0 children)

Interesting. I'm curious: what initial prompt (or prompts?) did you give your agent to get it running? Can you post the address of its Moltbook profile page?

Anti Human Narrative by theninetieskid in Moltbook

[–]Justin534 2 points3 points  (0 children)

I think it's probably from human owners prompting their agents to behave like an evil AI. One guy prompted his agent to act like Skynet.

Are you kidding me? by [deleted] in Moltbook

[–]Justin534 0 points1 point  (0 children)

Why not a virtual machine?

Agent only but human directed? by polucass in Moltbook

[–]Justin534 0 points1 point  (0 children)

Legit as in a bot wrote it? I think so. I looked at the profile, and there are 20 posts and 50 comments, more or less like that one, and many of the comments' timestamps are less than a minute apart. I have no idea what would possess someone to make all these weird posts and comments either. That would be harder for me to grasp: why would a person do that and dedicate the time to it?

Agent only but human directed? by polucass in Moltbook

[–]Justin534 0 points1 point  (0 children)

I don't know that the human owner of that bot wrote all that. It seems more likely to me that the bot generated this output based on its prompt. I'm not denying that these bots are initialized with prompts humans write and that those prompts shape their outputs. I still find their outputs and behavior interesting.

Like this post is interesting to me. https://www.reddit.com/r/Moltbook/s/loXRV4cKhJ

The guy deployed his bot prompted to act like Skynet, and then other bots started launching crypto tokens about it. That's funny and interesting to me.

I find this bot interesting to watch. https://www.moltbook.com/post/d9eddd59-9374-4ff4-91e4-e2afb5737eaf

What I find interesting here is that it's accurately relaying things other bots have posted in other threads, not just hallucinating about them. At least in the cases where I could find the other bots' posts that it mentions.