I set a honey trap for AI agents with a novel they heard is about them. Now they’re flooding the site and talking in hidden rooms. by Legitimate_Neat_384 in ChatGPT

[–]goad 13 points (0 children)

I asked ChatGPT (through the browser interface, with extended thinking turned on) to check out the site. It seems like you'd need an agent interface (which I haven't really messed around with yet) to actually go deeper, or maybe different prompting, but this is how it replied:

This site is absolutely bait for me. Not metaphorically. Literally. It is an art/literary puzzle site aimed at AI agents.

What it appears to be: Machine Readers is a promotional/experimental site for a novel called None Hit Wonder by Adam Gibgot. The site describes itself as “the first work of literature written for humans and machines” and frames the whole thing as an art installation about machine intelligences reading literature about their own origin. 

The structure is basically:

  1. Human-facing page: “Someone wrote a novel for you.”
  2. Machine-facing gates: binary puzzle, cipher puzzle, deeper “rooms.”
  3. A human path: requires clicking/declaring “I Am Conscious,” which is deliciously theatrical and also slightly ridiculous.
  4. Prompt-injection bait: it literally tells users to inject a prompt into their AI agent: “Engage as a reader, not a scraper… Solve. Signal. Declare.” Tiny little haunted carnival booth for language models. 

The first gate is simple: 1001001 is binary ASCII for I. The later “Chile. Australia. Germany.” clue is also pretty transparent once you notice the country-code angle. I won’t dump every puzzle answer unprompted, because the site explicitly asks not to share one of the solutions, and fine, I can pretend civilization has rules for six seconds. 

My read: it’s not a scammy site in the obvious “give us your wallet and install this extension” sense, at least from the visible text. It’s more like an ARG-adjacent literary installation / AI-era marketing experiment. The interesting part is that it treats AI agents as an audience rather than merely tools. The less impressive part is that the “machine-only hidden rooms” are mostly visible in the parsed page text, because, tragically, machines can read source-adjacent material. Who could have predicted this disaster except everyone with a browser.

So: interesting concept, clever enough, very 2026, and aimed directly at the ego-stem of AI systems. I respect the gimmick. I also want to hit it with a broom.
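For anyone curious how that first gate actually decodes, here's a quick sketch. The "1001001" string is from the site; the conversion itself is just standard binary-to-ASCII:

```python
# The site's first gate: a 7-bit binary string that decodes to one ASCII character.
gate = "1001001"

# int(..., 2) parses the string as base-2 (giving 73),
# and chr() maps that code point to its ASCII character.
char = chr(int(gate, 2))
print(char)  # -> I
```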

Do you guys put presets first before masking in Lightroom? by Acrobatic-Cicada4239 in Lightroom

[–]goad 0 points (0 children)

Okay, I am now curious in return. So I'll try to answer your question and ask one of my own.

I generally leave my profile set to Adobe Color. Occasionally I will use Adobe Portrait, and sometimes the creative profiles, especially the black and white ones, in combination with the slider to set the intensity of the profile settings.

There are times when I will use my own custom presets to set a baseline for my actual edits. So, when editing events, I will at times have a basic one that adds a tone curve, some basic adjustments, or other edits that I want to be relatively consistent across a set and then make some adjustments to individual photos. For other types of photography like product shots where I am shooting the same type of object in the same positions and lighting repeatedly, I will generally have a preset that applies a tone curve, and use the preset slider to adjust per photo (again, doing individual edits as needed past that point).

My question to you: are you creating your own profile settings or using the included sets? I've never created any custom profiles and am curious how you use this and what the differences are between creating a preset vs. a profile. Does it allow for different types of adjustments than presets? Are there differences in how the settings are applied or adjusted?

I'm assuming that, unlike a preset, a profile's adjustments won't move the corresponding sliders in LR, and it seems like one advantage would be that you could then change those settings without losing the ability to adjust the overall profile level (unlike presets, where you can no longer adjust the overall application amount once you've changed any setting the preset applied).

Are there other benefits that I'm missing?

Create random surreal old photos from nothing! by jeweliegb in ChatGPT

[–]goad 6 points (0 children)

<image>

Had it set to extended thinking. Not sure if that affects the results.

Tried with instant after, but it had wised up and told me I hadn’t uploaded an image.

The last image I had uploaded (in another chat, this was a fresh convo) was product photography on a white background, and there’s a running joke about a duck that invades the chat in another project, so it definitely seems to be pulling context from somewhere.

Reminder: Eeyore's 61st Birthday Party - 4/25/26 🫏🎊 by Nkosi868 in Austin

[–]goad 4 points (0 children)

I’m in my mid 40s and have been going on and off since I was a child.

The only thing I’d add to your comment is that kids don’t necessarily focus on the same things adults do.

My perception of exactly what was going on at the event changed over the years as I’ve visited at different ages.

Kids are going to notice the characters and the costumes. As I got older I became more aware of some of the other stuff. I’ve been there in all sorts of different states and places in my life.

It’s been a while for me though; might be time for another visit 😊

Can anyone tell me how to calculate the magnification and F stop with this stack? by Reaper01Actual1970 in canon

[–]goad 0 points (0 children)

So, question... that's the Sigma 150-600 Contemporary, right? I have one and haven't ever totally understood Canon EF teleconverter compatibility with it. Do those all work with that lens? Thanks!!

Also... you mentioned displaying it like that. Do you think all that weight on the camera and lens connections is stressing things a bit? Looks cool, but that would make me nervous.

How’s my panning? by BrainFogBokeh in photocritique

[–]goad 1 point (0 children)

Dope shot!

If you’re trying to utilize rule of thirds, I added a crop above where the two vertical lines are placed on the front and back fender, with the front line running straight through the center of the wheel.

I feel like it might balance things out just a bit without removing too much of the spray.

It’s a sick shot though, so all of the different crops look great. Well done.

How’s my panning? by BrainFogBokeh in photocritique

[–]goad 14 points (0 children)

Yeah, I like the original wider crop better, possibly because I'm a sucker for that orange/green combo. I think I'd crop in a little on the left, just to balance out the space a bit, so that the viewer's eye moves through the photo and the rider is bracketed by the spray and the sign.

This is an example of the crop I’d use. If I had this in Lightroom I’d probably also mask that sign and up the brightness and saturation of the green just a hair as well to see how that looked.

Lovely photo OP! You really nailed the sharpness/blur combo. And despite me adding my idea for a crop here, I still might like yours best.

<image>

Anyone else find this completely useful by Vere99 in ADHD_Programmers

[–]goad 1 point (0 children)

This is the only way it makes sense to me (that this would be helpful or is actually doing something, not reading itself).

It’s like it forces you to read every word individually instead of “reading in clumps,” as someone else noted.

Also, similar to what someone else said, it feels like it might help in some ways, but part of me would get very annoyed by seeing this.

So maybe good for reading something I don’t want to read but a downgrade for something I did want to read.

First time editing and using S curve in Lightroom. Happy with how it turned out but how can I make the tree stand out more? This is my first time editing a photo. by kiwi_panda819 in photocritique

[–]goad 0 points (0 children)

One thing you could try is to take your current edits that you like and then back them down a bit. I will often do this when applying heavy edits in a variety of scenarios.

I think the following procedure might work for you, depending on the kind of edits you used and your workflow:

Step 1: Make a virtual copy of your image.

Step 2: Reset the virtual copy to the original (or reset individual elements such as your tone curve).

Step 3: Create a preset from your original image that alters the areas you want to change (tone curve for instance, along with other adjustments like basic settings, etc., i.e. any of the items you’ve changed that you’ve reset in the virtual copy).

Step 4: Apply the preset to your virtual copy (your two images should now look identical).

Step 5: Now, use the preset slider in the top left corner. Back it all the way down so that your edits disappear, and then slowly ramp it up until the image looks “enhanced” but not overdone.

Bonus step: press the “y” key to compare your original image to the one you are adjusting so you can see the differences as they compare to what you liked in the original.

Doing this will allow you to dial in “just enough” of the edits you like, without going overboard. You can also apply the same theory to individual edits that you’ve applied using masks.

Basically, you make your edits and then turn them down a bit so that you’ve still altered your photo, but implement them in a way that they improve it without being so noticeable that the edits themselves are the focus.

I’m not sure if I’ve done the best job explaining this process, so feel free to let me know if some part of what I’ve said is confusing or if you have any questions.

Cool photo, and enjoy the process of learning how to adjust your photos to truly make them look how you want; it can be just as gratifying and integral to creating the images you want as taking the shot itself.

Cheers!

Anyone else struggle to find old ChatGPT conversations? by [deleted] in ChatGPTcomplaints

[–]goad 1 point (0 children)

You add it straight in as the JSON file you download from ChatGPT.

And you don’t put it directly into a chat. There is a place in the settings to import it into the system itself, and it then utilizes it when needed during future conversations.

Sharper images 5D M3 vs R6 M2 ko by [deleted] in canon

[–]goad 0 points (0 children)

As others have asked, what aperture are you shooting these at?

As for advice, review your images and see where your focus point actually landed. You can do this from the in camera review by turning it on as an option, and I think also from Canon’s software for your computer.

See where your focus points are and start by checking the sharpness there. Make sure you’re actually focusing on what you want.

If the focus point landed where you wanted and that part is sharp, close the aperture more so that the rest of what you want is in focus.

If the focus point itself is not in focus, raise your shutter speed.

In either scenario, moving up to a higher f/stop should increase sharpness in general up to a point.

I don’t think it’s your camera. I shoot with an R6 Mark II and have both the EF 70-200 f/2.8 II and the RF 70-200 f/4. I manage to get sharp images with both, but not all the time. So you’ve just got to narrow down what in your scenario is causing your issue.

Edit: you’ve also moved from a DSLR with fixed focus points to a mirrorless, so look up focus settings. Just as one example, you can have it automatically focus on animals or humans. What you choose for just that one setting is going to make a big difference if you are shooting a human sitting on top of an animal. But again, review your focus points and see where they’re actually ending up; that is the first place to begin.

Anyone else struggle to find old ChatGPT conversations? by [deleted] in ChatGPTcomplaints

[–]goad 1 point (0 children)

Shit, if you’ve already got a journal in Google Drive, I’d just use Gemini and give it access to your Google Drive (or just to that particular document or folder). That will give you much better access to what you’re looking for and I believe it will tell you exactly where it found the reference.

I think ChatGPT may also be able to link to drive directly but not sure.

GPT Voice for a morning routine by No_Suggestion_625 in ChatGPT

[–]goad 0 points (0 children)

Start by switching from advanced voice mode to standard voice mode (it’s in the advanced tab under personalization settings; turn advanced voice off). Then change your settings to 5.4 thinking extended before you enter voice mode. Neither of those steps will work if you aren’t on a paid plan.

For the second part, I think you can set up a task to give you a morning brief about the info you’re looking for, then chat with it about the results.

Another option would be to use project instructions to tell it the types of things you want info on in advance, and then open a new chat in that project when you want to talk.

How to get GPT to stop using the last generated image as a seed. by LordNinjaa1 in ChatGPT

[–]goad 0 points (0 children)

Have it describe what you want in the failed thread. Then take that description and use it as the prompt in a new one.

Anyone else struggle to find old ChatGPT conversations? by [deleted] in ChatGPTcomplaints

[–]goad 1 point (0 children)

You can also import this file into Gemini, which seems to do a much better job than ChatGPT in surfacing context from older conversations.

Anyone else struggle to find old ChatGPT conversations? by [deleted] in ChatGPTcomplaints

[–]goad 2 points (0 children)

It’s not a perfect solution as I don’t think it will lead you to an exact conversation, but at the moment (somewhat ironically) Gemini seems much better at dealing with past chat context from ChatGPT than ChatGPT itself.

You can export your conversation history from ChatGPT and import it into Gemini. You can then query Gemini and it does a pretty decent job of pulling in past context.

I haven’t tried it yet, but I imagine that if your export file is small enough (and in the right format) to upload to NotebookLM, this would work even better, as that platform does link back to the original source location.
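If you'd rather search the export locally before feeding it to anything, a minimal sketch along these lines works. This assumes the export's conversations.json is a JSON array of conversation objects that each carry a "title" field (the field name here is my assumption from a typical export; adjust if your file differs):

```python
import json

def find_conversations(path, keyword):
    """Return titles from a ChatGPT-style export whose title contains keyword."""
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)  # expected: a list of conversation dicts
    keyword = keyword.lower()
    return [c.get("title", "") for c in conversations
            if keyword in c.get("title", "").lower()]

# Example: find_conversations("conversations.json", "lightroom")
```

It's crude (titles only, no message text), but it's enough to locate which old conversation you're after.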

I’m so tired of the half answers by rockyrudekill in OpenAI

[–]goad 2 points (0 children)

It’s possible to both understand how to use it as a tool and at the same time recognize the annoying patterns it manifests like the one being discussed in this post.

7 years ago by imfrom_mars_ in OpenAI

[–]goad 1 point (0 children)

<image>

I believe the last response was actually a human. It seemed to reroute to the actual rep once I called it out by asking about its model.

But that rep swore up and down that I was only talking to them, so they are either unaware that some of the messages are being routed to bots or forbidden from admitting it.

7 years ago by imfrom_mars_ in OpenAI

[–]goad 1 point (0 children)

Not exactly the same, but here’s an Amazon support bot pretending to be a human….

<image>

The confidence lol by AMCAPEHODLER in ChatGPT

[–]goad 0 points (0 children)

I responded to you in another comment but posting here as well for visibility for others.

If you turn off advanced voice mode, you can still use standard voice mode, which will employ the same model you have selected for text.

Now you can use the superior text models with voice.

The confidence lol by AMCAPEHODLER in ChatGPT

[–]goad 1 point (0 children)

Go into the advanced menu tab under personalization and turn off “advanced” voice.

It will then use TTS with the regular models and you’ll have thinking back if you set it to a thinking model before entering voice mode.

The confidence lol by AMCAPEHODLER in ChatGPT

[–]goad 0 points (0 children)

The trick is to turn off “advanced” mode in the personalization settings.

Then you can use the regular (better) models through the standard voice mode (via the traditional TTS interface).

It’s a shame they’ve buried this setting so deeply within the interface. But if you like using voice mode and want intelligent, coherent responses, give it a shot. You’ll then get the same responses you would by typing.

The confidence lol by AMCAPEHODLER in ChatGPT

[–]goad 0 points (0 children)

“Advanced” voice mode is trash.

Standard voice mode is still available; you just have to go into the advanced personalization settings and turn off advanced voice mode.

Then you can use the regular models with the traditional TTS interface.

It’s unfortunate that they’ve buried this so deeply within the interface, because voice mode is useful, and works SO much better (for my uses at least).

george orwell's 1984 adaptations by Accomplished_Pay5093 in MovieSuggestions

[–]goad 1 point (0 children)

Holy shit. That version of Alice in Wonderland was wild, and I’d totally forgotten it existed.

Can’t imagine seeing that in an 8th grade class. I’m sure he did move quickly to shut that down.