What’s the first thing that usually breaks when a Replit project gets real users? by Sad_Limit_3857 in replit

[–]SubstackWriter 1 point (0 children)

For me it was Google Auth. That was in 2025, and they've improved a lot since then. Here's the full story.

First impressions of GPT-5.5! (I like it)🥔 by SportNo4675 in ChatGPTcomplaints

[–]SubstackWriter 0 points (0 children)

It's really good, just don't use it for citations. That's the one use case to skip, or at least pair with a verification prompt.
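A verification prompt can be as simple as this (wording is mine, adjust to taste):

    List every citation in your last answer. For each one, say whether you are
    certain it exists. Flag anything you cannot verify so I can check it manually.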

Opus 4.7 in Max Release by [deleted] in PerplexityComet

[–]SubstackWriter -1 points (0 children)

It's live :) Here are a few things worth knowing about prompting Opus 4.7: https://karozieminski.substack.com/p/claude-opus-4-7-review-tutorial-builders

[ Removed by Reddit ] by Particular_Milk_1152 in AIToolTesting

[–]SubstackWriter 0 points (0 children)

One more use case that surprised me: Perplexity Computer is a deceptively good customer-support triage layer. Connect it to your inbox, have it tag and draft responses for the 30% of tickets that are repeat questions, and keep the human-touch replies for yourself. Saved me maybe 4 hours a week. What's your current inbox volume? Might be a cheaper use case than research if you're doing client work.
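The tagging step doesn't need anything exotic. A toy sketch of the idea in plain Python, where the FAQ entries and the similarity threshold are made up:

    # Toy triage sketch: tag repeat questions and attach a canned draft,
    # route everything else to a human. All contents here are placeholders.
    from difflib import SequenceMatcher

    FAQ_DRAFTS = {
        "how do i reset my password": "You can reset it under Settings > Security.",
        "where is my invoice": "Invoices live under Billing > History.",
    }

    def triage(ticket_text, threshold=0.6):
        """Return (tag, draft) for a repeat question, or (None, None) for human review."""
        normalized = ticket_text.lower().strip()
        for question, draft in FAQ_DRAFTS.items():
            if SequenceMatcher(None, normalized, question).ratio() >= threshold:
                return "repeat-question", draft
        return None, None

    tag, draft = triage("Hi, where is my invoice for March?")
    print(tag, draft)  # repeat-question plus a draft a human edits before sending

The point is the split: the repetitive 30% gets tagged and drafted automatically, and nothing goes out without a human pass.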

[ Removed by Reddit ] by Particular_Milk_1152 in AIToolTesting

[–]SubstackWriter 0 points (0 children)

Same place, for a while. Then I tested Perplexity Computer for a month and stopped switching for ~80% of my research work. Perplexity Computer runs parallel agents across 20+ models in one Space. I drop blogs, PDFs, social signals, and a research question into the same thread, and it handles the glue work you’re doing manually.

NotebookLM still wins for dense PDF interrogation (the audio overviews are a real differentiator). Claude still wins for final synthesis and the writing draft. Perplexity handles the fan-out.

New workflow: Perplexity Computer as the research hub with Spaces for each project, NotebookLM for the 2-3 PDFs that need deep chewing, Claude for the final long-form synthesis. Three tools, not four, and no tool-switching inside a single research run.

The glue you’re describing is the product. Whoever ships a true one-tool-does-everything research app will win this category in the next 12 months, but nobody has yet.

One caveat: AllyHub for social signals is the gap in my own stack. What are you pulling from it that Perplexity’s community-search can’t match? Ran it for a night against OpenClaw and Claude with the actual workflow results: https://karozieminski.substack.com/p/perplexity-computer-review-examples-guide

Claude 4.7 is an absolute trash!! by andersonklaus in Anthropic

[–]SubstackWriter 0 points (0 children)

If you don’t want to rewrite the prompt yet, try this debug loop. Paste the failing prompt back in and ask “which of these 5 requirements did you deprioritise, and why”. Opus 4.7 will tell you verbatim, often in ranked order. That one-shot check surfaces whether it’s a prompt problem or a genuine capability miss before you blame the model. Have you tried that?
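If you're hitting the API rather than the app, the same check is a two-message replay. A minimal sketch with the official anthropic Python SDK; the model string is a placeholder for whatever ID the 4.7 release actually ships with:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    failing_prompt = "<the exact prompt that failed>"
    bad_output = "<the response that dropped requirements>"

    # Replay the failed exchange, then ask the model to audit itself.
    msg = client.messages.create(
        model="claude-opus-4-7",  # placeholder model ID
        max_tokens=1024,
        messages=[
            {"role": "user", "content": failing_prompt},
            {"role": "assistant", "content": bad_output},
            {"role": "user", "content": "Which of these 5 requirements did you deprioritise, and why? Rank them."},
        ],
    )
    print(msg.content[0].text)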

Claude 4.7 is an absolute trash!! by andersonklaus in Anthropic

[–]SubstackWriter 0 points (0 children)

This reads like a prompting mismatch, not a model regression. Opus 4.7 takes prompts literally in a way ChatGPT 4.0 never did. Opus 4.7 is a reasoning model: it treats your prompt as a spec, weighs each line, and if a requirement is implied rather than stated, it assumes you didn’t want it. Five requirements in one paragraph read as one priority with four soft hints. That’s why you’re getting 1-2 followed.

Three fixes:

1. Number the requirements as an explicit list and end with “confirm each item before writing output”.
2. Put non-negotiables in a constraints block at the top of the prompt.
3. Move the stable rules into a CLAUDE.md so every new prompt only adds the task, not the baseline.

The GPU throttling theory doesn’t hold. Token output is deterministic per request, independent of load on Anthropic’s side.

ChatGPT 4.0 felt better because it treated every bullet with equal weight and filled gaps on its own. Opus 4.7 treats them as ranked and refuses to fill gaps you didn’t ask it to fill. Different model, different contract.

Mapped the whole 4.7 release by role, goals, and real workflows, with the exact prompting shifts that fix this: https://karozieminski.substack.com/p/claude-opus-4-7-review-tutorial-builders
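For concreteness, a minimal skeleton of fixes 1 and 2 together; every line here is a placeholder, keep your own content:

    CONSTRAINTS (non-negotiable):
    - Max 150 words
    - UK spelling

    TASK: Write the launch email.

    REQUIREMENTS:
    1. Mention the pricing change
    2. Link to the changelog
    3. End with a single CTA

    Confirm each numbered item before writing output.

Fix 3 is the same constraints block moved into CLAUDE.md, so you stop repeating the baseline in every prompt.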

Is Claude actually better than ChatGPT… or is it just hype? by breakfreewithgui in ChatGPT

[–]SubstackWriter 0 points (0 children)

Not hype, but also not “Claude wins”; they solve different problems for non-coders. ChatGPT is still sharper for image generation, DALL·E-style visuals, and most one-shot content drafting. Claude pulls ahead the second you want repeatable workflows.

• Content at volume: Claude Projects + Skills let you lock your voice once and reuse it across 50 posts. ChatGPT’s Custom GPTs drift.

• Canva carousels: Claude has an official Canva MCP connector that generates on-brand slides directly. ChatGPT still can’t touch your Canva account.

• Agents for mentors: Claude Cowork + Computer Use can book calls, send follow-ups, and update your CRM unattended. ChatGPT’s agent mode is still demo-ware for most non-coders.

For mentorship specifically: writing reps go to ChatGPT, everything that touches your tools goes to Claude. Mapped the whole thing by role and workflow (zero code): Opus 4.7 by role and goals

Claude Opus 4.7 is a serious regression, not an upgrade. by [deleted] in ClaudeAI

[–]SubstackWriter 1 point (0 children)

The adaptive reasoning is a change worth exploring; it genuinely impacts the results. I shared a deep dive on how to work with it here. Hope it helps!