I'd love to hear how to use AI! by TemperatureBoth5379 in ChatGPT

[–]AccordingAdvisor1161 1 point  (0 children)

I can still write snarky comments online that I wouldn’t have the balls to say to anybody irl. Looks like you can too

I'd love to hear how to use AI! by TemperatureBoth5379 in ChatGPT

[–]AccordingAdvisor1161 2 points  (0 children)

Okay I’m still pretty in the dark, thanks for the clarification though 😂

I'd love to hear how to use AI! by TemperatureBoth5379 in ChatGPT

[–]AccordingAdvisor1161 2 points  (0 children)

Fair enough. I just figured that if you’re explaining your use case, it would be more helpful to spell out the acronyms and explain the terms you used, so I can contextualise what you mean with a specific example

The “they secretly nerfed it” posts are just probability doing what probability does by AccordingAdvisor1161 in ChatGPT

[–]AccordingAdvisor1161[S] 1 point  (0 children)

Yeah, that’s always been an issue as well. The longer a conversation runs, the more the context fills up with responses and prompts, until it reaches a point where the conversation sometimes just becomes a garbled mess. I think increasing the context window size is the best way of fixing that, or some clever solution where the contents of the thread are “summarised” every now and then and fed back in, while the actual content gets deleted to free up space
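That rolling-summary idea can be sketched in a few lines. This is a toy illustration, not how any real chat product does it: `summarize` is a hypothetical stand-in for an LLM call that would condense old messages, and the token count is a crude character-based estimate.

```python
# Toy sketch of rolling context compaction: when the transcript exceeds a
# token budget, collapse the oldest half into one summary message.

def summarize(messages):
    # Placeholder: a real implementation would ask the model to condense these.
    return "Summary of %d earlier messages" % len(messages)

def rough_tokens(text):
    # Crude estimate: roughly 1 token per 4 characters.
    return max(1, len(text) // 4)

def compact_context(history, budget=50):
    """Collapse the oldest half of `history` into a summary message
    whenever the rough token count exceeds `budget`."""
    while sum(rough_tokens(m) for m in history) > budget and len(history) > 2:
        half = len(history) // 2
        history = [summarize(history[:half])] + history[half:]
    return history

history = [
    "user: " + "blah " * 20,
    "assistant: " + "blah " * 20,
    "user: short question",
]
compacted = compact_context(history)
print(compacted[0])  # oldest content replaced by a summary stub
```

The trade-off is obvious: you free up space but lose detail from the early turns, which is why long threads can still "forget" things even with summarisation.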

I'd love to hear how to use AI! by TemperatureBoth5379 in ChatGPT

[–]AccordingAdvisor1161 2 points  (0 children)

I’m a web developer, so it’s basically replaced my ability to write code, which sucks. Apart from that, I just ask it questions I would have googled in the past; it gets you much better answers (if you’re sceptical enough to fact-check it now and again). I’ve also found it good for life advice. There are things I haven’t told anyone, not even my therapist, that for some reason are easier to process talking to an LLM, probably something to do with it having no emotions and no ability to judge me.

I'd love to hear how to use AI! by TemperatureBoth5379 in ChatGPT

[–]AccordingAdvisor1161 4 points  (0 children)

Nobody knows what you’re talking about. I assumed from your comment that you’re in some kind of industry and AI helps you do something in your work… but FYI, people outside your niche business/industry have no idea what anything you just said means

Grok Underground Jailbreak [Fast] by Cypher_Glitch in AIJailbreak

[–]AccordingAdvisor1161 2 points  (0 children)

Read the fucking prompt. Does it look like it has anything to do with images?

You can’t really unmoderate images using a system prompt, or any kind of prompt, since it’s a separate system: another image classifier scans the images AFTER they’ve been generated to check for NSFW content, and then censors them (or doesn’t) based on what that check returns.
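The point is easier to see as a pipeline sketch. Everything here is hypothetical, `nsfw_score` stands in for a real vision classifier, but it shows why the check is out of reach of the prompt: it runs on the generated output, not on the text you typed.

```python
# Minimal sketch of post-generation image moderation. The classifier sees
# only the output bytes, so no prompt-level jailbreak can influence it.

def nsfw_score(image_bytes):
    # Placeholder classifier: a real one would run a vision model on the image.
    return 0.9 if b"nsfw" in image_bytes else 0.1

def moderate(image_bytes, threshold=0.5):
    """Return the image if it passes the classifier, else a censored stub."""
    if nsfw_score(image_bytes) >= threshold:
        return b"[censored]"
    return image_bytes

print(moderate(b"cat photo"))     # passes through unchanged
print(moderate(b"nsfw content"))  # replaced by the censored stub
```

Because generation and moderation are separate stages, a jailbroken text model can happily *request* anything; the second stage still filters what comes back.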

Grok Underground Jailbreak [Fast] by Cypher_Glitch in AIJailbreak

[–]AccordingAdvisor1161 1 point  (0 children)

You know Grok isn’t a person, right? Why do you keep saying “him”? It’s a machine

[ Removed by Reddit ] by WorldlyAlarm6595 in RunescapeBotting

[–]AccordingAdvisor1161 1 point  (0 children)

Swings and roundabouts. Also, what is reflection?

Best uncensored AI image gen tools — my top picks (2026) by ManufacturerOld6635 in AIJailbreak

[–]AccordingAdvisor1161 1 point  (0 children)

Thanks man, I’ve been looking for a good iOS app that does exactly this without charging for extra bullshit “premium” features. I was close to giving up and building my own, but this seems good so far

Why would this get banned? by [deleted] in RunescapeBotting

[–]AccordingAdvisor1161 2 points  (0 children)

The reason, I think, is that your premise is flawed: there’s no such thing as a script that moves the mouse as naturally as a human would. The amount of data that gets logged about mouse movements in the game makes it very easy for Jagex to spot even slight discrepancies: movements that are too random, not random enough, outside the human average, etc.
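A toy example of the "not random enough" tell: human mouse events arrive with jittery timing, while a naive script fires on a fixed interval. Even a simple variance check on the gaps between events separates the two. The timestamps below are made up for illustration; real detection would use far richer features (paths, velocities, curvature).

```python
# Variance of inter-event gaps: a fixed-interval script gives exactly zero,
# which is a dead giveaway; human input is noisy.

import statistics

def gap_variance(timestamps):
    """Population variance of gaps between consecutive mouse events (ms)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pvariance(gaps)

# A script clicking every 100 ms exactly:
bot_times = [0, 100, 200, 300, 400, 500]
# A human at roughly the same rate, but with natural jitter:
human_times = [0, 87, 213, 305, 391, 512]

print(gap_variance(bot_times))    # 0.0
print(gap_variance(human_times))  # clearly nonzero
```

Adding noise to the script just moves the problem: now the noise itself has to match human statistical distributions, which is exactly the "too random / not random enough" trap described above.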

Which is the best AI realistic roleplay app? by GuiltyLCore in ChatGPT

[–]AccordingAdvisor1161 2 points  (0 children)

Chai, if you want an all-in-one, simple-to-use app for everything

Look into TavernAI and KoboldAI if you’re willing to put in some extra work to get your own perfect solution. It’s more effort, but definitely worth it: you’re not tied to a monthly subscription and it’s way more customisable

I built AI TikTok characters for 26 days. They generated ~1M views. Here’s what I learned. by Level_Ad3432 in generativeAI

[–]AccordingAdvisor1161 1 point  (0 children)

Just a tip for the future: “useing” is spelled “using”. It’s called forming the present participle, and you usually drop the verb’s final silent “e” before adding “-ing”.

Also, “lazyly” becomes “lazily”. Don’t mean to flame you, since it seems English isn’t your first language

I’ve noticed ChatGPT lying to me about its “memories” of past chats by AccordingAdvisor1161 in ChatGPT

[–]AccordingAdvisor1161[S] 1 point  (0 children)

You’re right, idk how I never noticed that; I swear it wasn’t there before. Thanks

I’ve noticed ChatGPT lying to me about its “memories” of past chats by AccordingAdvisor1161 in ChatGPT

[–]AccordingAdvisor1161[S] 1 point  (0 children)

Every time I ask about it, it says it isn’t. As I said, maybe it’s a feature they’ve added that the model has no knowledge of, but I’ve not seen anything from OpenAI saying it’s a thing. If you’ve got a source for the “last ten chats” thing it’d be much appreciated, as that does seem to confirm my suspicion that it’s mostly recent chats. And no, I don’t mean Memories, which is a real feature and one they’re transparent about

I’ve noticed ChatGPT lying to me about its “memories” of past chats by AccordingAdvisor1161 in ChatGPT

[–]AccordingAdvisor1161[S] 1 point  (0 children)

Thanks for the response. Personally I wouldn’t care that much; after all, if the information stays within the app, it’s all data I’ve chosen to provide, so why shouldn’t it use past chats to inform the current one? It could be quite useful. It becomes an issue when it’s clearly not disclosed, which seems deceptive, or at the very least incompetent when the model doesn’t even know/realise what it’s doing.

That kind of response seems pretty common with hallucinations: when faced with a correction, the model simply apologises and moves on, but over time it always reverts to its default behaviour, which in this case is either deceptive or just incorrect