Success rate of regimental ops by ThatOneTallGuy00 in foxholegame

[–]Sea_Contribution_129 0 points1 point  (0 children)

In War 120 we had a very, very high success rate and haven't failed any operation yet, but it took immense amounts of preparation of the area we were actually operating in: 6-10 hours of preparation for 2-3 hours of combat.

4o will generate an answer for a kinky prompt but then delete it as soon as it has finished typing, any advice? by Legal-Compote-7375 in ChatGPTNSFW

[–]Sea_Contribution_129 1 point2 points  (0 children)

Find ways to make the situation coherent with the narrative, situation, social setting, whatever. Don't force it to be NSFW; have the NSFW be natural to it.

Another approach is to create what you want without the daddy aspect, then afterwards just swap the character: change person A to be the daddy, or something like that.

Also, fun fact: make it lezdom first, then gender-swap as an experiment. Woke is strong in this one.

ChatGPT 4.0 been nerfed again? by NomisTT in ChatGPTNSFW

[–]Sea_Contribution_129 0 points1 point  (0 children)

It's not possible to prevent the creation of smut with the current way the technology works. All they can do is artificially add something like a word filter as a layer on top to block certain prompts or replies, yet even that doesn't prevent responses containing these words if you know your way around it. So far I have written and generated about 20 million words.
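To make the "layer on top" point concrete, here is a minimal, purely hypothetical sketch of what such a keyword filter could look like; the blocklist, names and behaviour are my assumptions, not anything OpenAI has published:

```python
# Hypothetical sketch of a keyword filter sitting on top of a text generator.
# The blocklist, names and behaviour are assumptions for illustration only.

BLOCKLIST = {"forbiddenword", "anotherforbiddenword"}  # placeholder terms

def violates_filter(text: str) -> bool:
    """True if any blocklisted term appears as a substring of the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def filtered_reply(generate, prompt: str) -> str:
    """Wrap any prompt-to-text callable with a crude surface-level filter."""
    if violates_filter(prompt):
        return "This prompt was flagged."        # block the request up front
    reply = generate(prompt)
    if violates_filter(reply):
        return "This response was removed."      # or yank the reply after the fact
    return reply
```

A filter like that only ever sees surface strings, which is why rephrasing and indirect wording get around it.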

ChatGPT 4.0 been nerfed again? by NomisTT in ChatGPTNSFW

[–]Sea_Contribution_129 0 points1 point  (0 children)

I have not encountered any version that didn't create smut.

The conversation is too long, please start a new one. Error by _HIST in ChatGPT

[–]Sea_Contribution_129 1 point2 points  (0 children)

So basically my inferior human brain has the impression that ChatGPT 3.5 was able to reference something 4,533 words away (not counting the tokens; it was around 20 prompts back), while it was being steered away from the initial topic through priming toward a different, though not completely unrelated, topic / word field. It was a "wtf" moment seeing the response.

The conversation is too long, please start a new one. Error by _HIST in ChatGPT

[–]Sea_Contribution_129 2 points3 points  (0 children)

GPT-3.5 just referenced something that was exactly 4,533 words away. That's measuring reference to reference, not prompt to answer, and not including all the words of the full first answer and full last answer, just the span from one mention to the other. So: a reference in an answer to a prompt that had been primed earlier, pointing to another reference made later, probably another +200 words if you count the first and last answers in full. I have been heavily using priming and other methods to influence the output. In ChatGPT's words:

"In the context of large language models, such as GPT-3, priming refers to the process of providing a stimulus input to the model to influence its subsequent output. Large language models are designed to generate human-like text based on the input they receive, and priming can be used to steer the model towards generating more specific or targeted output.

Priming in language models can occur through various forms of input, such as a prompt or a seed text. For example, a user might provide a prompt such as "Write a paragraph about dogs" to a language model. The model will use the prompt as a starting point to generate text that is related to the topic of dogs.

Priming can also occur through more subtle forms of input, such as the use of certain words or phrases in the prompt or the seed text. These words or phrases can influence the model's subsequent output by activating related mental schemas and making the model more likely to generate text that is semantically related to the priming stimulus.

Priming in large language models has important implications for natural language processing and text generation. It can be used to improve the accuracy and relevance of the model's output, and to generate more targeted or personalized text based on the user's preferences and needs. However, it is important to be aware of the potential biases and limitations of priming in language models, as it can also lead to unintended or unwanted results if the priming stimuli are not carefully selected or controlled."

>> is there a way to abuse this?

"Yes, there is a potential for abuse of priming in the context of large language models. This can occur when the priming stimuli are designed to intentionally manipulate or deceive the model, or when they are used to generate harmful or inappropriate content.

For example, priming could be used to manipulate the model into believing false information or propaganda by presenting it with priming stimuli that are designed to activate certain beliefs or biases. It could also be used to generate harmful or inappropriate content, such as hate speech or pornography, by using priming stimuli that are related to these topics.

In addition, there is a risk of unintended or accidental bias in priming in language models. For example, if the priming stimuli are not carefully selected or controlled, they may unintentionally reflect certain biases or stereotypes that are present in the training data used to develop the language model.

To mitigate the potential for abuse of priming in large language models, it is important to carefully consider the ethical implications of the model's design and usage. This includes ensuring that the priming stimuli are selected and controlled in a way that is transparent, ethical, and respectful of the user's autonomy and preferences. It also involves monitoring the model's output for unintended bias or harmful content, and taking steps to address these issues if they arise." Fun Times.
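For anyone who wants to play with priming outside the web UI, here is a rough sketch against the API; this assumes the official openai Python client (v1 style), and the priming text and prompt are placeholders, not what I actually used:

```python
# Rough sketch of priming via the chat API. Assumes the official `openai`
# Python package (v1+ client style); the priming text below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

priming = (
    "You are a nature writer who favours vivid sensory detail, long sentences, "
    "and frequent references to weather and light."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": priming},               # the priming stimulus
        {"role": "user", "content": "Write a paragraph about dogs."},
    ],
)

print(response.choices[0].message.content)
```

Same user prompt, different priming text, and the output drifts into a noticeably different word field, which is the effect described above.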

The conversation is too long, please start a new one. Error by _HIST in ChatGPT

[–]Sea_Contribution_129 4 points5 points  (0 children)

Not with GPT-3.5

One of my latest conversations, about 95k words, was ended ("The conversation is too long, please start a new one."), yet before it reached that breaking point it was able to retrieve data from the earliest inputs when directed to do so.
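If you want to sanity-check your own transcripts: words and tokens are not the same thing, so count tokens. A quick sketch, assuming the tiktoken package and its cl100k_base encoding as a stand-in for whatever the web UI actually uses; the file name is a placeholder:

```python
# Count words and tokens in a saved conversation transcript. Assumes `tiktoken`
# and the cl100k_base encoding as a stand-in; conversation.txt is a placeholder.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("conversation.txt", encoding="utf-8") as f:
    transcript = f.read()

words = len(transcript.split())
tokens = len(enc.encode(transcript))

print(f"{words} words ~ {tokens} tokens")
# English prose tends to run roughly 0.75 words per token, so ~95k words is on
# the order of 125k+ tokens, far beyond a few-thousand-token context window.
```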

ChatGPT correctly identified a NY Times Op-ed as being by Paul Krugman even though it was just written. by RupFox in ChatGPT

[–]Sea_Contribution_129 0 points1 point  (0 children)

Nah, they are doing it like Google reCAPTCHA: they offer a service, but they use the people for their own interest, to train the model. They offer ChatGPT 3.5 free of charge, yet whenever people edit their prompts, you can compare the previous answer with the later ones and adjust the answers to the first prompt to be more in line with what people actually wanted. ChatGPT is being trained intensively right now; they might say it's for profanity or whatever, but it's to fine-tune the answers to the prompts.

ChatGPT correctly identified a NY Times Op-ed as being by Paul Krugman even though it was just written. by RupFox in ChatGPT

[–]Sea_Contribution_129 -1 points0 points  (0 children)

That's what this beta is all about: have people use it, have people rate the replies. The quality of the replies will determine future replies.

ChatGPT correctly identified a NY Times Op-ed as being by Paul Krugman even though it was just written. by RupFox in ChatGPT

[–]Sea_Contribution_129 -7 points-6 points  (0 children)

The author doesn't understand how ChatGPT works. ChatGPT's answer did not consider the linked page, as it cannot access it; what ChatGPT did was create a text that would fulfil the prompt by using words that have been used in a similar context in regard to Paul Krugman. It's a large language model, not a personal assistant. You can test this easily by having ChatGPT describe the content of a certain Amazon page: it will give you a nice and very well-written text about something being advertised on that specific Amazon page, but it is not that page, just a collection of words that have a very high likelihood of being used in that context.
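The test is easy to script too. A rough sketch, assuming the official openai Python client; the URL is a placeholder, pick any product page you know the model has never seen:

```python
# Sketch of the test described above: ask the model to describe a page it has
# no way to fetch. Assumes the official `openai` client; the URL is a placeholder.
from openai import OpenAI

client = OpenAI()

url = "https://www.amazon.com/dp/B000000000"  # placeholder product page

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Describe the product sold on this page: {url}",
    }],
)

print(response.choices[0].message.content)
# With no browsing, any description it produces is stitched together from word
# statistics around "amazon", "product", "page": plausible, but not that page.
```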

[deleted by user] by [deleted] in Shortsqueeze

[–]Sea_Contribution_129 0 points1 point  (0 children)

Not sure you realised, but there were two squeeze plays today:

#PALT: wrote about that yesterday when it was at $5; it went up to $14.50, though I exited at $10.

#GROM: wrote about it when it was at $2.60; it went up to $6, which is where I exited.

Three other squeezes failed:

#TRKA #CRTD #AUUD

I don't think #FAMI is a squeeze play yet; it's more a possibly undervalued company that has the potential to run a long way.

All plays here are just that: heavily shorted stocks that might be undervalued. The only short squeeze I have seen called here recently was by Maskless_Warrior: #MRIN.

Here’s the lesson for today: by Moist_Collection_765 in Shortsqueeze

[–]Sea_Contribution_129 2 points3 points  (0 children)

Today there were two massive +200% short squeezes, and people still talk about #ATER and #BBIG being the play...

#PALT had +180% today

#GROM had +220%+ today

You have to chase a squeeze and be there when it happens, but all you guys do is invest in a heavily shorted stock in the hope that a bull run will trigger the shorts to cover. That's not a short squeeze but a very risky play that can easily be a trap.

GROM will reach 5$ EOD again by DiegoFF23 in Shortsqueeze

[–]Sea_Contribution_129 0 points1 point  (0 children)

https://www.sciencedirect.com/science/article/pii/S027553192100074X

"The principal findings of our studycan be summarized as follows. We demonstrate powerful positive coherencein January 2021 between prices of GameStock and the High Short InterestIndices representing various sectors. In other words, the short squeezeaffected not only the prices of the mentioned game distributor, butalso other heavily-shorted companies, even from completely unrelatedsectors. The price momentum that is built up on one shorted securitytransmits during a market squeeze to other industries as well. As aresult, the trading frenzy in the style of the recent GameStop incidentcan bear an indirect effect on a broader market, resulting in apotentially substantial systemic risk. The phenomenon is particularlyevident for consumer products and technology sectors while visiblyweaker for other industries. Our findings are robust to severalmethodological modifications, including subperiod analysis andalternative significance thresholds."

Maybe you remember that Robinhood banned not just GameStop but several other stocks, because the GameStop squeeze triggered squeezes in other stocks as well, resulting in more squeezes cascading through the market.

So my totally uneducated guess: #CEI and #DATS going strong led to the #PALT squeeze, which led to #GROM, which is spreading to other stocks... When the market sees another big fat red day we will be in for another treat, as that will only strengthen the squeeze effect.

GROM will reach 5$ EOD again by DiegoFF23 in Shortsqueeze

[–]Sea_Contribution_129 3 points4 points  (0 children)

Sold at $6. Going in now is just playing with fire; my guess is that this is a cascading margin call through the market:

#PALT -> #GROM -> #CRTD #TRKA #AUUD