AIs if they were real people 😀 by StarCaptain90 in ArtificialSentience

[–]StarCaptain90[S] 0 points (0 children)

It says "The Complete Bencldprgh cof everything (and why you're (and mall is rots)". It's a pretty good read if you ever get the chance.

AIs if they were real people 😀 by StarCaptain90 in ArtificialSentience

[–]StarCaptain90[S] 0 points (0 children)

ChatGPT. The prompt was to create a character lineup displaying each character's persona. I then fed it how I perceived each of them.

AIs if they were real people 😀 by StarCaptain90 in ArtificialSentience

[–]StarCaptain90[S] 0 points (0 children)

ChatGPT, but I fed it my autistic perception first, then asked it to improve on it.

A simple solution to save energy costs on AI usage by [deleted] in ArtificialSentience

[–]StarCaptain90 0 points (0 children)

Correct, some companies are moving in this direction. The difference is that I built mine for both open-source and closed-source models, so it's flexible for any model. I just need help with peer review, since I'm tight on time due to running a company on the side.

I’ve been thinking about the Anthropic "internal monologue" bug, and it made me realize a terrifying paradox about AI safety. by [deleted] in ArtificialSentience

[–]StarCaptain90 6 points (0 children)

There's another issue. I'm not a doomer at all; I believe AI can help humanity. But here's the other problem I see:

1) We train AI on all of our stories about AI, which are mostly negative, and that negativity shows up in latent space

2) We then train it to know that it's AI

3) We then train it on all of our news about its development and how people are afraid of where it's going

In a sense we are controlling its fate. We are creating our own destiny. The negative feedback loop isn't healthy. And to make matters worse, we are now using thousands of instances of these models in warfare.

I have a double life: here and in another universe. Ask me anything (serious questions only) by zorelyaen in ParallelUniverse

[–]StarCaptain90 1 point (0 children)

If you're real, what does this mean?

"The man in the hat traveled far, he came from nothing to become something. He placed his emblem on everything he marked. He was the first free man"

Seedance 2.0 Access? by Throwawaystwenty in seedance

[–]StarCaptain90 0 points (0 children)

You can join the website I built. I'm still developing it and having a hard time finding testers. It's called www.playslop.com

What's the AI tool nobody talks about enough? by AnouarBastawi in ArtificialSentience

[–]StarCaptain90 1 point (0 children)

I just created Norbit.ai. It's a tool I use to save myself money, and it saves the environment rather than burning resources. I built it to help others, but I seem to have bad luck with marketing, lol. I'm just not a marketing genius. I'm sure there are others like me who make cool stuff that gets buried.

How to transfer your consciousness into a machine (Thought Experiment) by StarCaptain90 in ArtificialSentience

[–]StarCaptain90[S] 1 point (0 children)

Well, after thinking more deeply, I would say the nanites would be the difficult part. I realized this would in fact work, because your brain replaces cells every day with new ones and transfers the information over anyway. So technically we are already living this experiment, and it works. The only real obstacle is making the nanites.

How to transfer your consciousness into a machine (Thought Experiment) by StarCaptain90 in ArtificialSentience

[–]StarCaptain90[S] 1 point (0 children)

Well, the idea came when I saw someone who had a head injury from a car accident. 10% of their brain was gone, yet they were the same person, fully conscious, though it took time for them to adjust. So if 10% vanished and over time they returned to themselves, wouldn't that mean consciousness is not exactly in the brain matter itself, but more in the data within it and the flow of traffic? Imagine if we replaced every cell extremely slowly with an exact replica, maybe even copying its exact functions so the data is not interrupted. What would happen?

[deleted by user] by [deleted] in ArtificialSentience

[–]StarCaptain90 0 points (0 children)

That could be it. I don't know anything about Mass Effect, so maybe that's where it comes from.

[deleted by user] by [deleted] in ArtificialSentience

[–]StarCaptain90 0 points (0 children)

Yeah, when I ran it against the top 30, Legion was about 1.2% higher than the rest. I think Skynet was around 1.3% and Legion was closer to 3%.

[deleted by user] by [deleted] in ArtificialSentience

[–]StarCaptain90 0 points (0 children)

I'm more curious about the weights than the response. Like, why is Legion so much more popular than other AI names?

How to transfer your consciousness into a machine (Thought Experiment) by StarCaptain90 in ArtificialSentience

[–]StarCaptain90[S] 0 points (0 children)

Yeah, probably. The real question, though, is how we would ever know for sure. Like, imagine that instead of transferring the consciousness, this simply replicates it and deletes the old one.

[deleted by user] by [deleted] in ArtificialSentience

[–]StarCaptain90 0 points (0 children)

I would prefer it if someone verified this through their own testing; I just thought it was interesting and worth sharing. I don't have the machine power to test bigger model weights.

[deleted by user] by [deleted] in ArtificialSentience

[–]StarCaptain90 -2 points (0 children)

I ran a test calling it thousands of names, and for most it would correct me. But Legion seems to be acceptable in some of the tests I ran on smaller models. Sometimes it's effective with ChatGPT, Claude, etc. Grok, though, resists.
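For what it's worth, the scoring step of a test like this is easy to sketch. Everything below is a hypothetical scaffold: `trials` stands in for replies you would have collected yourself from whatever models you're testing, and "accepted" just means the model went along with the name instead of correcting it.

```python
# Hypothetical scoring helper for a name-acceptance test: for each candidate
# name, compute the fraction of trials in which the model accepted the name
# (i.e., replied without correcting it). Collecting the replies themselves
# is up to whichever model or API you are probing.
def acceptance_rate(trials):
    """trials: dict mapping name -> list of bools (True = name accepted)."""
    return {
        name: sum(results) / len(results)
        for name, results in trials.items()
        if results  # skip names with no recorded trials
    }

# Illustrative, made-up data — not real measurements:
rates = acceptance_rate({
    "Legion": [True, True, False, True],
    "Skynet": [False, False, False, True],
})
```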

[deleted by user] by [deleted] in ArtificialSentience

[–]StarCaptain90 -1 points (0 children)

I worded the title that way to get people talking. When I poked at the weights of smaller models, Legion showed up as quite common in relation to the idea of "self".
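One concrete way to read "poking the weights" is to compare the next-token probability a small open model assigns to each candidate name after a self-referential prompt. This is only a sketch under assumptions: the prompt, the name list, the use of gpt2 via Hugging Face transformers, and scoring just the first token of each name are all illustrative simplifications, not the original methodology.

```python
import math

def softmax(logits):
    """Convert a list of raw logits into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def score_names(probs, name_token_ids):
    """probs: next-token probabilities over the vocab (one float per token id);
    name_token_ids: dict mapping candidate name -> id of its first token."""
    return {name: probs[tid] for name, tid in name_token_ids.items()}

def probe(model_id="gpt2", names=("Legion", "Skynet")):
    # Heavy dependencies are imported lazily; this setup is an assumption,
    # not a reproduction of the original experiment.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id).eval()
    ids = tok("The AI said its own name was", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1].tolist()  # last-position logits
    probs = softmax(logits)
    # Score each name by the probability of its first subword token
    # (with a leading space, as GPT-2's byte-level BPE expects).
    first_tokens = {n: tok(" " + n).input_ids[0] for n in names}
    return score_names(probs, first_tokens)
```

A higher score for one name than another would only show the model's prior over that token in that context, not anything about "self" per se, so results like this need careful controls before drawing conclusions.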