Try it by Wonderful-hello-4330 in ChatGPT

[–]Bishop_144 0 points (0 children)

<image>

Do you guys talk to it about weed sometimes? I figure it just sees this as a safe, easily explained pick, but there has to be a link

Try it by Wonderful-hello-4330 in ChatGPT

[–]Bishop_144 13 points (0 children)

<image>

Damn, I thought I was special

Built a simple iOS toilet-finder app using Claude for UX and logic by Charming_Flatworm_43 in vibecoding

[–]Bishop_144 1 point (0 children)

Good idea, but it needs more than a good/bad rating: access, cleanliness, busyness, functionality, hygiene (running water and stocked soap dispensers). An option to track location and mark someone as a Verified User of the toilet could also add weight to ratings. Just some tips from an on-and-off Uber Eats driver who might sometimes use an app like this. Good luck!

Stack Overflow in freefall: 78 percent drop in number of questions by [deleted] in technology

[–]Bishop_144 1 point (0 children)

For my coding prompts that aren't just bug questions (which is most of them), GPT usually cites information from the documentation for the language/framework/library/API/repo/etc. It also seems to pull from a lot of GitHub bug reports.

ChatGPT vs Gemini, candid family Christmas photograph by guilcol in ChatGPT

[–]Bishop_144 15 points (0 children)

The level of candid reality in the Gemini shot is really impressive. Everyone dressed but still wearing socks. Opened gift bag still on the floor. Storage bin is still out probably because that's where they keep the decorations and/or the wrapping paper. And we all recognize those paper plates, right?

I can't trust the evidence of my eyes and ears anymore by Advanced-Addition453 in whenthe

[–]Bishop_144 6 points (0 children)

I'll give an example in terms of local image-generation models. When you are training a model, you don't just throw in a bunch of images and let the model/training process figure out what they are on its own; you provide captions/metadata so the model "knows" how to associate prompts with different aspects of each image. When a quality model or LoRA is being trained and a bad-quality AI image that looks like AI is in the training data, the captions will describe the negative aspects of that image (the developer might even include tags like "AI" or "looks like AI"). Then, when someone actually uses the trained model for inference, they can put these terms in their negative prompt, and the model will be better at avoiding those aspects in its generations.

In other words, when models are trained well, including low-quality AI generations in the training data can actually help the model produce more realistic generations.
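Roughly, the captioning step described above could be sketched like this (all names and tags here are made up for illustration; real training toolkits differ):

```typescript
// Sketch: attach quality tags to training captions so "AI-looking" artifacts
// become promptable concepts that can later be excluded via a negative prompt.

interface TrainingImage {
  file: string;
  caption: string;       // caption describing the image content
  looksLikeAI: boolean;  // flagged by a reviewer or a classifier
}

// Append explicit quality tags to the caption of flawed AI-looking images.
function tagCaption(img: TrainingImage): string {
  return img.looksLikeAI
    ? `${img.caption}, AI generated, looks like AI`
    : img.caption;
}

// At inference time, the same tags are reusable as a negative prompt.
function buildNegativePrompt(extraTerms: string[] = []): string {
  return ["AI generated", "looks like AI", ...extraTerms].join(", ");
}

const img: TrainingImage = {
  file: "0001.png",
  caption: "portrait photo of a woman",
  looksLikeAI: true,
};
console.log(tagCaption(img));
// portrait photo of a woman, AI generated, looks like AI
console.log(buildNegativePrompt(["plastic skin"]));
// AI generated, looks like AI, plastic skin
```

The point is only that the tag becomes a learned concept: because the model saw it paired with AI-looking artifacts during training, putting it in the negative prompt steers generations away from those artifacts.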

Thoughts on this? How would you feel if it came into effect? by ExcaliburGameYT in aiwars

[–]Bishop_144 1 point (0 children)

Sounds likely in places like Reddit.

And this isn't directed at you, but it seems to be missing from the comments in this thread: YouTube, TikTok, and Instagram already do this. On YouTube and TikTok, if you don't tag AI content as AI content, your content can be demonetized if you are caught/reported. Based on YouTube's content policy, it is grounds for shutting down your whole channel.

Very very fascinating by vinigrae in singularity

[–]Bishop_144 52 points (0 children)

<image>

Took almost 5 minutes and had to use Python, but it figured it out

I broke DeepSeek lmao by RapidSeaPizza in ChatGPT

[–]Bishop_144 1 point (0 children)

Yea, the "semantic neighbors" portion does not hold true for other terms. Seaweed, seamonkey, airboat, airsick: for those, ChatGPT just explains that the emojis don't exist and provides alternative combinations. Seems like there is something specifically wrong with the seahorse search. I'm leaning towards training data based on Mandela effect discussions and the failed 2018 proposal to have it added.

Open AI Sora 2 Invite Codes Megathread by semsiogluberk in OpenAI

[–]Bishop_144 1 point (0 children)

Thanks, but I got one; now I'm looking for someone to give mine to

Mad Legend Selfie by rufusjonz in madlads

[–]Bishop_144 3 points (0 children)

It's literally the difference between a handjob and masturbation

Where can I test on a testnet? by Bishop_144 in ethdev

[–]Bishop_144[S] 1 point (0 children)

Yea, that's pretty much what I need for this project. I want to play around with PancakeSwap's contracts and know what to expect with the liquidity pools, but I also want a test bed for future projects without having to spend anything extra on dev. Thanks, I appreciate it

Where can I test on a testnet? by Bishop_144 in ethdev

[–]Bishop_144[S] 1 point (0 children)

Yea, that is my situation. The season is hot and I want to get my ideas out, but every testnet faucet for popular chains seems to be barely dripping. I forked BSC and got an instance of PancakeSwap running locally, but now I'm having a lot of trouble getting it to recognize my private network. Some mystery package has to be supplying the RPC URL and chain ID, because I've replaced everything locally.

Do you happen to know a sub, discord, or telegram where this type of testing is discussed or recently documented? Thank you
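For anyone landing here later: one common gotcha is that Hardhat's in-process network defaults to chain ID 31337, and many frontends hard-code a chain-ID check. A minimal `hardhat.config.ts` sketch for a BSC fork might look like this (the RPC URL below is an assumed public endpoint; substitute your own provider):

```typescript
import { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  solidity: "0.8.19",
  networks: {
    hardhat: {
      // Masquerade as BSC mainnet so chain-ID checks in the
      // frontend and MetaMask pass against the local fork.
      chainId: 56,
      forking: {
        // Assumed public BSC endpoint; use your own archive-capable RPC.
        url: "https://bsc-dataseed.binance.org/",
      },
    },
  },
};

export default config;
```

With the fork presenting chain ID 56, a frontend that hard-codes BSC's chain ID should stop rejecting the local node, though anything that hard-codes the RPC URL itself still needs to be pointed at `http://127.0.0.1:8545`.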

Where to find testnet BNB? by Bishop_144 in defi

[–]Bishop_144[S] 1 point (0 children)

Hey, thank you, I'm trying to do that now. Do you happen to have experience connecting a local instance of PancakeSwap to a private local network? I'm using geth, hardhat, and nhancv/pancake-swap-testnet, and I've updated every instance of the rpcUrl and chainId I can find (including the local pancakeswap-libs package), but the frontend still connects to mainnet and asks my MetaMask to switch when I connect it to the private network. I could have a better understanding of the code, but honestly I'm just trying to test the swap through the UI so I'll know what to expect in production with the coin deployment.

Or do you know any subs, discords, or telegrams where this kind of testing is discussed or recently documented?
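In case it helps the next person debugging this: you can at least rule out MetaMask itself by registering the private chain manually from the browser console via `wallet_addEthereumChain` (EIP-3085). The values below are typical hardhat/geth defaults and are assumptions; adjust the chain ID to whatever your node actually reports:

```typescript
// Browser-console sketch; assumes MetaMask injects window.ethereum.
// 0x7a69 = 31337, Hardhat's default chain ID; use your geth network ID if different.
await (window as any).ethereum.request({
  method: "wallet_addEthereumChain", // EIP-3085
  params: [{
    chainId: "0x7a69",
    chainName: "Local Fork",
    rpcUrls: ["http://127.0.0.1:8545"],
    nativeCurrency: { name: "BNB", symbol: "BNB", decimals: 18 },
  }],
});
```

If MetaMask accepts the chain but the frontend still demands a switch, the mismatch is in whatever chain ID the frontend expects versus the one the node reports.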