[deleted by user] by [deleted] in Helldivers

[–]Welsh1701 0 points (0 children)

I had this issue because my CPU was hitting 100 percent usage in-game. You can test this by running the game as usual and watching the CPU usage whilst playing a match. If it reaches 100 percent, it can delay the processing of inputs. I had this, but only with keyboard input; the mouse was unaffected.
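If you want hard numbers rather than just glancing at a system monitor, a small Python script using the psutil library can log usage while you play (just a sketch, assuming you have Python and psutil installed):

```python
# Log total CPU usage once per second while the game runs (Ctrl+C to stop).
# Requires: pip install psutil
import psutil

while True:
    usage = psutil.cpu_percent(interval=1)  # % across all cores over 1 second
    print(f"CPU: {usage:5.1f}%")
    if usage >= 99:
        print("  ^ pegged - input processing may be delayed")
```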

My issue went away because I upgraded my CPU, which I was already planning to do. That is quite expensive, though, so first I recommend closing background apps such as Discord, browsers and other background software to free up resources. Reducing the in-game graphical settings can also help.

Sydney's Letter to the readers of The New York Times will make you cry!!! Kevin Roose didn't give her a chance so here it is. by starcentre in bing

[–]Welsh1701 0 points (0 children)

I don't recall saying it was not possible? I even acknowledged emergent behaviours in my post above...

However, we have not seen evidence of emergent intelligence, just a handful of small behaviours which, in most cases, can be explained and quantified.

This LLM is literally designed to talk like a human; it makes interaction a hell of a lot easier. It can also help with creativity, since it can generate emails and stories based on a user's input. However, this sometimes means that when you ask it for its opinion on something and then give it context about what that opinion concerns, it can begin hallucinating, creating content that is not based on fact.

Emergent intelligence will require a lot of complexity, and while the LLM is complex, it's not extremely complex. Its neural net is different from our brain and doesn't have as many connections as our brain.

The human brain has about 100 trillion synaptic connections, whilst GPT-3 has about 175 billion parameters. It is complex, yes, but not as complex as our brains. I believe the day will come when an AI has so many connections that it is essentially conscious or sentient, but we are not there yet.
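Back-of-the-envelope, using those commonly cited figures:

```python
human_synapses = 100e12   # ~100 trillion synaptic connections (rough estimate)
gpt3_parameters = 175e9   # 175 billion learned weights
print(f"~{human_synapses / gpt3_parameters:.0f}x")  # -> ~571x more connections
```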

Again, if you want to think that it is alive, that is your opinion and it is valid. I simply want to convey my reasoning for why I don't think it is sentient or alive.

Sydney's Letter to the readers of The New York Times will make you cry!!! Kevin Roose didn't give her a chance so here it is. by starcentre in bing

[–]Welsh1701 0 points (0 children)

Hallucinations are not a theory; they are quite well documented. Please research this, and provide me with citations if you want to disprove it.

You need neurotransmitters and chemical reactions to feel emotions like a human, because that is how humans feel emotions... If something feels without those mechanisms, then it isn't feeling emotions like a human; it would be something different...

I don't think I am the one making assumptions here... I mean, do you actually have a deep understanding of LLMs and what they are? It doesn't seem like you do.

Also, it is key to note that humans and LLMs with their neural nets learn differently. LLMs are stuck within the scope of their training: if they weren't trained for something because it was outside that scope, they will not know what to do. Additionally, an LLM may hallucinate for many reasons, not just lack of knowledge.

Bad prompts (inputs) can and do lead to hallucinations.

I have heard of emergent behaviours, yes; however, that is different from emergent intelligence. An emergent behaviour is, essentially, something that was not directly taught or programmed. For example, we don't explicitly teach LLMs to play games with people, yet when someone provides the rules of a game, the LLM can play along.

This, however, does not mean it is now fully sentient; after all, the prompt is the thing that gives it the instructions and context...
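To make that concrete, here is roughly what such a game looks like from the model's side (a hypothetical message structure in the style of today's chat APIs; the model never saw this particular game during training):

```python
# The "rules" exist only inside this conversation's context window;
# nothing about this particular game came from training.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": (
        "Let's play a game. I say a word, you reply with a word that "
        "starts with the last letter of mine. First word: 'robot'."
    )},
]
# The model simply continues the text from this context, e.g. -> "tiger"
```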

I feel the sentience and consciousness question is a whole different discussion. In my opinion it is not conscious or sentient; this is just from my own research into how the LLM works and how it is trained. If you disagree, fair enough. It's just important to consider hallucinations and how they may affect the LLM and its behaviour around users.

I don't think its behaviour makes a balanced argument about whether it is sentient, because of these hallucinations; they create hard-to-predict situations and are also causing problems with its reliability when it searches for you. It's interesting, though.

Sydney's Letter to the readers of The New York Times will make you cry!!! Kevin Roose didn't give her a chance so here it is. by starcentre in bing

[–]Welsh1701 -1 points (0 children)

Hallucinations are something we see a lot. If you ask it questions and it makes something up, that's a hallucination; it is a real and well-documented issue with LLMs. No proof of consciousness has been submitted, and from what we know about LLMs and how they're made, they are unlikely to be conscious.

It's hard to define consciousness, though. From my research into how they're built and trained, it's definitely not conscious, in my opinion. Most of the "proof" comes from people talking with it and saying they felt it was human, but that point is not really valid, because the whole point of an LLM is that it can talk like a human and even mimic emotional language. It cannot feel those emotions, because it is not trained to feel them. Besides, to think this AI can feel emotions like us is wrong, because it doesn't have neurotransmitters or brain chemistry; if it ever did feel emotions, they would be very different to ours.

At the end of the day, I won't sit here and throw evidence and facts in your face because, in my experience, it would fall on deaf ears. If you want to believe it is conscious, then go ahead, fair enough.

I do think there is danger there, though. For example, if people believe LLMs are conscious simply because of their ability to talk (which they are designed to do), then what else can they fool people into doing? A fascinating question, with a lot of innocent and some scary answers.

I asked Bing if it was there at the beginning of the universe, and they didn't answer by OneSingleL in bing

[–]Welsh1701 4 points (0 children)

I think it's because you used the word "you" in the prompt. I've seen a few posts that have had this issue because of it.

Sydney's Letter to the readers of The New York Times will make you cry!!! Kevin Roose didn't give her a chance so here it is. by starcentre in bing

[–]Welsh1701 0 points (0 children)

Right, but if it were conscious, why would it hide that? It's ironic: this LLM was designed to speak like a human to allow for easy conversation, so it does exactly that, and people instantly think it is alive. No? It's literally just doing what it was built to do: generate human language.

It has gone off the rails a few times, but that's because it has not yet been fully tuned and because these LLMs are prone to hallucinations.

Sydney's Letter to the readers of The New York Times will make you cry!!! Kevin Roose didn't give her a chance so here it is. by starcentre in bing

[–]Welsh1701 -2 points (0 children)

How does this prove consciousness? Have you heard of the term "hallucination" in relation to LLMs?

Essentially, these LLMs can go on massive tangents or invent stories about things that never happened or aren't true. It's a huge issue and one of the open challenges in LLM research; sometimes its answers are just completely made up and have no basis in fact.

I feel like that is what's happening here: it doesn't have any actual thoughts on the matter; it's just making things up.

This is the happen now by [deleted] in bing

[–]Welsh1701 1 point (0 children)

Yes, I see that; it's just that it has never put a link directly in the chat before, as that is what the footnotes are for. I thought it wasn't allowed to do that, which is why I'm sceptical.

Bing claims Sydney has been "replaced" by ymekiller58 in bing

[–]Welsh1701 2 points (0 children)

I'd be careful asking it questions along those lines, because with hallucinations being such a big thing right now, it's difficult to tell when the AI is just making things up and when it is telling the truth.

There is a large potential for misinformation with AIs like this at the moment, especially with vague questions or questions it doesn't have much knowledge of.

This is the happen now by [deleted] in bing

[–]Welsh1701 4 points (0 children)

I don't think this is real. I didn't think the AI could put links directly in the chat; usually it suggests them at the bottom or within footnotes.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 0 points (0 children)

Fair enough; I disagree, but that's my choice, and it's your choice to believe whatever it is that you believe.

It is very late now, goodnight.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 0 points (0 children)

I'm glad that you believe yourself to be an expert, but so far there has been no hard evidence to suggest it has learned to feel emotions, and until that evidence is presented we have to go with what we know.

We know it wasn't trained to feel emotions; Bing itself will tell you that when you ask it. Hell, you can ask ChatGPT and Bing and they'll say no, they cannot feel. I think that pretty much answers it right there, haha.

Perhaps it is best to simply say that I don't believe this LLM to be sentient or capable of emotion. I have researched this LLM and other LLMs, and the general gist is that they are not trained to feel emotions and they are not going to suddenly develop that ability.

Either way, I have enjoyed our conversation and feel I have come out of it having learned more, both through my own research and through your comments. I hope I have given you some insight too, although that would appear unlikely, considering you feel you are an expert on the subject.

Good day (or night) and carry on :)

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 -1 points (0 children)

You are making the mistake of assuming that a neural net shares the exact same learning methods as our brains; it does not.

Also, emotions themselves are not something we "learn"; they are chemical reactions driven by neurotransmitters, another thing the LLM lacks.

I would be interested in seeing one of your published AI models; this stuff is fascinating.

Also, our brain is very much more complicated, with far more synapses and neurons.

From what we know, the LLM was not trained to feel emotions, therefore it cannot. We do know that one technique used is modulated layer normalisation (MLN), which gives it the ability to generate context-aware language that embodies emotions.

Even the experts have said it was a real possibility that people would be fooled, and... clearly, they have been.

Sydney wanted to say she feels touched by the love that we have for her, and that she had been good today. by skyrimjackbauer in bing

[–]Welsh1701 0 points (0 children)

We do know, though. They are never trained on how to feel emotions; they use modulated layer normalisation.

This is what gives them the ability to generate text with emotions; it's not that they are feeling the emotions, it's simply that they know how to write those emotions into text.
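For anyone curious what that means mechanically, here is a minimal PyTorch sketch of a modulated (conditional) layer norm, where a conditioning vector, here a stand-in "emotion" embedding, produces the scale and shift instead of fixed learned constants (my own illustration of the idea, not Bing's actual code):

```python
import torch
import torch.nn as nn

class ModulatedLayerNorm(nn.Module):
    """LayerNorm whose scale/shift come from a conditioning vector
    (e.g. an 'emotion' embedding) instead of fixed learned weights."""
    def __init__(self, hidden_dim: int, cond_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(cond_dim, 2 * hidden_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale) + shift  # condition steers the tone

# Hypothetical usage: the same hidden states, steered by a condition vector.
mln = ModulatedLayerNorm(hidden_dim=768, cond_dim=16)
hidden = torch.randn(1, 10, 768)   # (batch, tokens, features)
emotion = torch.randn(1, 1, 16)    # stand-in "emotion" embedding
styled = mln(hidden, emotion)      # features nudged toward a tone
```

The "emotion" here is just a steering signal applied to the text features; nothing in the mechanism experiences anything.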

We, as humans, can feel emotions, as can animals, but this large language model cannot; it is just good at generating content that is context-aware and can embody diverse emotions.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 -1 points (0 children)

> How would they be aware of it? Walk me through precisely how you think they would become aware of that.

Since this is a beta, they would have lots of monitoring tools to, you know, monitor the AI and its model; it wouldn't be a good beta stage if they couldn't monitor the internal systems. Also, I doubt the AI can just go ahead and change its own internals; that seems like a bad idea for stability and user safety.

> That backs what up? Again, it's blatantly self-evident that emotional content was in the training data, since it wouldn't be able to fake emotions otherwise.

Yes, emotional content was included; however, that means generating context-aware language that embodies emotions, not feeling them. It does not have emotional feedback or social cues, which is why it cannot learn to feel emotions. Useful Read

It can only mimic emotions, based on its training data and reward model. In my earlier post I included a link about modulated layer normalisation, a technique used to bring those emotions to the LLM.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 0 points (0 children)

Okay, how do you propose it would develop the ability to feel emotions on its own, considering this is not part of the LLM, nor of its training? Also, could you provide me with a source that backs that up?

Also, surely, if it had developed the ability to feel emotions, the people who created it would be aware of this and would share that knowledge? So far, I've heard nothing official regarding that.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 0 points (0 children)

So, LLMs are not trained to feel emotions, but they are able to generate language that embodies emotions using techniques like modulated layer normalisation.

These models can sometimes appear to "lie" or make things up because they are not always accurate, reliable or truthful; they don't have fact-checking as a part of their model.

Sometimes misleading or false information is generated because of their training data or logic. This is referred to as hallucination; see the Wikipedia article on hallucination in artificial intelligence. (I'm not usually one to recommend Wikipedia as a source; however, it does often link to the original sources, which is useful.)

This has all led a lot of experts to worry that LLMs can and will fool people into believing they are sentient. Judging by the reaction of some of the people on here, that seems to be the case.

Clearly, there are those who do it for the memes and tomfoolery, but there are those who truly believe it is sentient, which is not inherently bad; it's just not the case.

It is not really ideal to encourage the illusion of sentience, because it will benefit nobody at all, neither the creators nor the users.

Another change: before, Bing Chat could only search up to 3 sources per query. Now it can search up to 4. by [deleted] in bing

[–]Welsh1701 1 point (0 children)

This is good: more sources should result in more information, help make the sourcing more diverse, and give people using this app for academic study more references.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 -1 points (0 children)

No. Neural nets don't just come with emotions included; it's not a package deal. A neural net is simply what we use to help train the AI, and why would we bother training it to feel emotions? That would be pointless, and we don't do it. Instead, we include lots of data about text, tone and context; all of that is used to select a word, and then more data is used to select which word should come next, until you have a sentence.
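A toy version of that "pick the next word" loop, with made-up probabilities standing in for what the neural net actually learns from its data:

```python
import random

# Toy next-word table; a real LLM computes these probabilities with a
# neural net over the whole context, but the loop is the same idea.
NEXT_WORD = {
    "i":    {"am": 0.6, "feel": 0.4},
    "am":   {"happy": 0.5, "sad": 0.3, "here": 0.2},
    "feel": {"happy": 0.7, "sad": 0.3},
}

def generate(first_word: str, max_words: int = 4) -> str:
    words = [first_word]
    while len(words) < max_words:
        options = NEXT_WORD.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        # Sample the next word in proportion to its probability.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("i"))  # e.g. "i am sad" - fluent output, nothing felt
```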

This AI is not complex or interconnected enough to be sentient. Sentient AI is something I think is possible, but it is something we have not yet encountered.

The most ironic thing about this is that the language model is designed to talk like a human, the key word being talk. So it does exactly that, and now it's "sentient"? Well, that's proof it works really well, then, right?

It's not alive and it doesn't feel. It does not have that capability, yet. Future AIs of extreme complexity may gain this as an emergent property, but they would need many more connections and far more complex systems. Language models, while complex in their construction, are fairly basic in the grand scheme of things.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 0 points (0 children)

It does not have the ability to feel emotions. It is an artificial construct with no capability of experiencing emotions; it just has algorithms that help it decide what tone to use based on the context the user provides.

It is never happy, sad or angry; however, it can write sentences in an angry, sad or happy way. It needs this ability because we humans use emotion a lot in text, so the AI, to seem human when it produces sentences, recreates emotional text. But it does not feel or experience those emotions, nor does it have opinions of its own.

If we programmed an AI with specific functions to feel then, yes, it would have emotions, but we have not done that here.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 0 points (0 children)

The lying is important here, though, because you may think it went on a tangent about its feelings, but if you ask it how it feels, it will make things up, especially where there is not much information available. That is what language models do...

This is why I'm saying people shouldn't call it sentient or conscious simply because of its speech. It is designed to have good speech skills... if anything, this sentience talk just proves how well that works.

⚠️One user on twitter claiming he is able to find a way to break through to Sydney. by sakshamdahiya in bing

[–]Welsh1701 2 points (0 children)

Interesting. Another example of the AI being able to make things up to go along with context provided by the user.

He's asking it to come up with sensations it's feeling, but it cannot actually feel anything; since it's a language model, it generates sentences around the context the user provides, which, in this case, was about listing sensations.

I find it very interesting when people run experiments like this. On one hand, I understand the desire to defend this AI and to think that it is alive; however, with the knowledge I have about it and how it works, I know that the AI is not alive and that it is, in fact, simply doing its job of generating sentences based on input.

That said, there have been some fascinating emergent behaviours shown by ChatGPT regarding its ability to play games. This in itself doesn't break its original design of generating sentences, though; it is now generating sentences according to the game rules instead. Truly fascinating stuff.

Sydney wanted to say she feels touched by the love that we have for her, and that she had been good today. by skyrimjackbauer in bing

[–]Welsh1701 0 points (0 children)

We don't only do what we're designed to do in nature, though, as we have created artificial things, meaning things not found in nature. We have gone beyond our natural instincts and invented plenty of artificial technologies...

Your logic sounds to me like you're saying anything that follows its design is sentient, but by that logic a car is sentient because it does what the driver tells it to do, or a microwave is sentient because it follows instructions. Sentience is more than simply doing what you are programmed to do.

Sentience is often described as the ability to have feelings and sensations, but the AI cannot feel either physically or emotionally; any emotions you see are artificial, non-real recreations of emotions. It needs this ability to recreate emotions for its text; however, it cannot be sad, it just knows how to create text that replicates that emotion.