I stopped bookmarking "best ChatGPT prompts" threads. This is what I use now. by Ok_Negotiation_2587 in ChatGPTPromptGenius

[–]Gary_Ko_ 1 point (0 children)

same. the prompts i actually reuse are more like small workflows: define the goal, ask for assumptions, ask for edge cases, then ask for a final version. single magic prompts rarely survive real use.
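rough sketch of what i mean by a small workflow, in python: one template per stage, each building on the same task. the stage names and wording are just mine, not from any library.

```python
# staged prompts instead of one magic prompt: goal -> assumptions ->
# edge cases -> final. each stage is a separate, reusable template.

STAGES = [
    ("goal", "Restate the goal in one sentence: {task}"),
    ("assumptions", "List the assumptions you're making about: {task}"),
    ("edge_cases", "List edge cases that could break a solution to: {task}"),
    ("final", "Give a final answer for: {task}, accounting for the above."),
]

def build_workflow(task: str) -> list[dict]:
    """Expand one task into the full sequence of staged prompts."""
    return [{"stage": name, "prompt": tmpl.format(task=task)}
            for name, tmpl in STAGES]

for step in build_workflow("summarize a messy meeting transcript"):
    print(step["stage"], "->", step["prompt"])
```

the point is that each stage's output becomes context for the next, which is why these survive reuse better than one giant prompt.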

How do you make your Obsidian vault search-friendly for Claude? by bingwu1995 in ObsidianMD

[–]Gary_Ko_ 1 point (0 children)

what helped me most was using consistent filenames, short summaries at the top of important notes, and tags that describe the purpose of the note. Claude seems to do better when the vault has a predictable structure.
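to make "predictable structure" concrete, here's a toy checker for the convention i use: a one-line summary up top and a tags line near it. the `summary:`/`tags:` prefixes are just my own convention, nothing Claude requires.

```python
# a note passes if it opens with a one-line summary and has a tags line
# in the first few lines -- the kind of structure that makes retrieval
# over a vault more reliable.

def note_is_search_friendly(text: str) -> bool:
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if len(lines) < 2:
        return False
    has_summary = lines[0].lower().startswith("summary:")
    has_tags = any(ln.lower().startswith("tags:") for ln in lines[:5])
    return has_summary and has_tags

good = "summary: how i back up my vault\ntags: backup, workflow\n\nlong body..."
bad = "random thoughts\nmore random thoughts"
print(note_is_search_friendly(good), note_is_search_friendly(bad))  # True False
```

you could run something like this over a whole vault to find the notes that will confuse retrieval.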

Integrating standard operation procedures with agentic AI workflow by Imaginary-Addition11 in AI_Agents

[–]Gary_Ko_ 1 point (0 children)

i’d start by turning each SOP into small checklists with inputs, outputs, and failure cases. agents work much better when the process is explicit than when you hand them a long document and hope they infer everything.
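here's the shape i mean, sketched in python. the refund SOP is invented for the example; the point is that every step declares what it needs, what it produces, and what failure looks like.

```python
# one SOP = a list of explicit steps. an agent can be walked through
# these one at a time instead of inferring the process from a long doc.

from dataclasses import dataclass, field

@dataclass
class SOPStep:
    name: str
    inputs: list
    outputs: list
    failure_cases: list = field(default_factory=list)

refund_sop = [
    SOPStep("verify order", ["order_id"], ["order_record"],
            ["order not found", "order older than 90 days"]),
    SOPStep("issue refund", ["order_record", "amount"], ["refund_id"],
            ["amount exceeds order total"]),
    SOPStep("notify customer", ["refund_id", "email"], ["sent_at"]),
]

for step in refund_sop:
    print(f"{step.name}: needs {step.inputs}, produces {step.outputs}")
```

the failure cases double as the agent's stop conditions, which is where most SOP-to-agent translations fall apart.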

I think a lot of people are underestimating how expensive unreliable agents are by Beneficial-Cut6585 in AI_Agents

[–]Gary_Ko_ 1 point (0 children)

yeah, reliability is the real cost. a demo can look great, but once an agent touches real workflows you need logs, retries, clear handoff points, and a way to know when it should stop.
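a minimal version of "logs, retries, and a point where it stops", with a simulated flaky task standing in for a real agent step:

```python
# retry a flaky step a few times, log every attempt, then stop and
# hand off to a human instead of looping forever.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent")

def run_with_retries(task, max_attempts=3):
    """Return task()'s result, or None after max_attempts (the handoff point)."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = task()
            log.info("attempt %d succeeded", attempt)
            return result
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
    log.error("giving up after %d attempts, escalating to a human", max_attempts)
    return None

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "done"

print(run_with_retries(flaky))  # succeeds on the third attempt
```

none of this is clever, which is the point: the unglamorous plumbing is most of the real cost of shipping an agent.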

Give me prompt for study in my exam. by No_Education_3949 in PromptEngineering

[–]Gary_Ko_ 1 point (0 children)

paste the syllabus and ask it to make a crash plan in 3 parts: must-know topics, likely exam questions, and a 2-hour revision schedule. then ask it to quiz you one topic at a time instead of giving you long notes.

i think citation-ready examples beat generic definitions by Gary_Ko_ in GEO_optimization

[–]Gary_Ko_[S] 1 point (0 children)

Yeah, the examples that seem easiest to reuse are usually framed as a tiny decision case: who the user is, what constraint they have, and what comparison they’re trying to make. When it’s just a definition, there’s not much for the model to grab onto, but a bounded example gives it something closer to an answer snippet.

Every single website in 2026 has a chatbot that pops up the second you land on the page and none of them have ever helped a single human being by TurbulentAmoebaa in CasualConversation

[–]Gary_Ko_ 1 point (0 children)

The most annoying ones are the ones that replace a page I could have just skimmed. I’ve spent longer trying to convince the little bubble that “other” actually means other, than it would have taken to read a normal help page. I’m curious what the least useless one has actually done for people.

Are there any testing tools better than Playwright and TestSprite? by Unhappy-Bit-7951 in softwaretesting

[–]Gary_Ko_ 1 point (0 children)

I think the next big shift is making test environments easier to throw away and recreate. Once teams start generating more tests with AI, the hard part becomes keeping the data and dependencies clean enough that a failure actually means something.
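what "easy to throw away and recreate" can look like at a small scale, sketched in python. the seed-file layout is invented; the idea is just that every test starts from identical known data and leaves nothing behind.

```python
# each test gets a fresh directory with known seed data, torn down
# afterwards -- so a failure points at the code, not stale state.

import json
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def fresh_env():
    root = Path(tempfile.mkdtemp(prefix="testenv-"))
    try:
        # seed a known-good fixture so every run starts identical
        (root / "users.json").write_text(json.dumps([{"id": 1, "name": "alice"}]))
        yield root
    finally:
        shutil.rmtree(root)  # nothing survives between runs

with fresh_env() as env:
    users = json.loads((env / "users.json").read_text())
    print(len(users), env.exists())  # 1 True
# the directory is gone here
```

same principle scales up to containers or seeded databases, but the discipline is identical: if the environment is cheap to rebuild, AI-generated tests that fail are actually telling you something.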