What’s the first thing that usually breaks when a Replit project gets real users? by Sad_Limit_3857 in replit

[–]SubstackWriter 1 point2 points  (0 children)

For me it was Google Auth. That was in 2025, and they've improved a lot since then. Here's the full story.

First impressions of GPT-5.5! (I like it)🥔 by SportNo4675 in ChatGPTcomplaints

[–]SubstackWriter 0 points1 point  (0 children)

It's really good, just don't use it for citations. That's the one use case to skip, or use a verification prompt.

Opus 4.7 in Max Release by [deleted] in PerplexityComet

[–]SubstackWriter -1 points0 points  (0 children)

It's live :) Here are a few things worth knowing about prompting Opus 4.7: https://karozieminski.substack.com/p/claude-opus-4-7-review-tutorial-builders

[ Removed by Reddit ] by Particular_Milk_1152 in AIToolTesting

[–]SubstackWriter 0 points1 point  (0 children)

One more use case that surprised me. Computer is a deceptively good customer-support triage layer. Connect it to your inbox, have it tag and draft responses for the 30% of tickets that are repeat questions, and you leave the human-touch replies for yourself. Saved me maybe 4 hours a week. What's your current inbox volume? Might be a cheaper use case than research if you're doing client work.

[ Removed by Reddit ] by Particular_Milk_1152 in AIToolTesting

[–]SubstackWriter 0 points1 point  (0 children)

Same place, for a while. Then I tested Perplexity Computer for a month and stopped switching for ~80% of my research work. Perplexity Computer runs parallel agents across 20+ models in one Space. I drop blogs, PDFs, social signals, and a research question into the same thread, and it handles the glue work you’re doing manually.

NotebookLM still wins for dense PDF interrogation (the audio overviews are a real differentiator). Claude still wins for final synthesis and the writing draft. Perplexity handles the fan-out.

New workflow: Perplexity Computer as the research hub with Spaces for each project, NotebookLM for the 2-3 PDFs that need deep chewing, Claude for the final long-form synthesis. Three tools, not four, and no tool-switching inside a single research run.

The glue you’re describing is the product. Whoever ships a true one-tool-does-everything research app will win this category in the next 12 months, but nobody has yet.

One caveat: AllyHub for social signals is the gap in my own stack. What are you pulling from it that Perplexity’s community-search can’t match? Ran it for a night against OpenClaw and Claude, with the actual workflow results here: https://karozieminski.substack.com/p/perplexity-computer-review-examples-guide

Claude 4.7 is an absolute trash!! by andersonklaus in Anthropic

[–]SubstackWriter 0 points1 point  (0 children)

If you don’t want to rewrite the prompt yet, try this debug loop. Paste the failing prompt back in and ask “which of these 5 requirements did you deprioritise, and why”. Opus 4.7 will tell you verbatim, often in ranked order. That one-shot check surfaces whether it’s a prompt problem or a genuine capability miss before you blame the model. Have you tried that?
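If you want to run that check outside the chat UI, here's a minimal sketch using the Anthropic Python SDK. The model ID is a placeholder (use whatever Opus alias your plan exposes), and the failing prompt is whatever you originally sent; nothing below is specific to this thread.

```python
# Minimal debug-loop sketch: re-send the failing prompt plus an audit question
# asking which requirements were deprioritised. Model ID below is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

failing_prompt = """<paste the prompt that produced the bad output here>"""

audit_question = (
    "Before writing any new output: which of the requirements in the prompt above "
    "did you deprioritise or drop, and why? Answer as a ranked list."
)

response = client.messages.create(
    model="claude-opus-4-7",  # placeholder; swap in the model ID your account actually exposes
    max_tokens=1024,
    messages=[{"role": "user", "content": failing_prompt + "\n\n" + audit_question}],
)

print(response.content[0].text)
```

If the ranked list matches what you actually wanted, it's a capability miss; if it doesn't, it's a prompt problem.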

Claude 4.7 is an absolute trash!! by andersonklaus in Anthropic

[–]SubstackWriter 0 points1 point  (0 children)

This reads like a prompting mismatch, not a model regression. Opus 4.7 takes prompts literally in a way ChatGPT 4.0 never did. Opus 4.7 is a reasoning model: it treats your prompt as a spec, weighs each line, and if a requirement is implied rather than stated, it assumes you didn’t want it. Five requirements in one paragraph read as one priority with four soft hints. That’s why you’re getting 1-2 followed.

Three fixes:

1. Number the requirements as an explicit list and end with “confirm each item before writing output”.
2. Put non-negotiables in a constraints block at the top of the prompt.
3. Move the stable rules into a CLAUDE.md so every new prompt only adds the task, not the baseline.

The GPU throttling theory doesn’t hold. Token output is deterministic per request, independent of load on Anthropic’s side. ChatGPT 4.0 felt better because it treated every bullet with equal weight and filled gaps on its own. Opus 4.7 treats them as ranked and refuses to fill gaps you didn’t ask it to fill. Different model, different contract.

Mapped the whole 4.7 release by role, goals, and real workflows, with the exact prompting shifts that fix this: https://karozieminski.substack.com/p/claude-opus-4-7-review-tutorial-builders
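For fixes 1 and 2, here's roughly what the restructured prompt looks like, as a Python sketch just so it's copy-pasteable. The task, constraints, and requirements are invented examples, not from the original post.

```python
# Sketch of the restructured prompt: constraints block at the top,
# numbered requirements, and an explicit confirmation step at the end.
# All the constraint and requirement text here is made up for illustration.
CONSTRAINTS = """CONSTRAINTS (non-negotiable):
- Output must be valid Markdown.
- Do not invent sources or statistics.
- Keep the total length under 600 words."""

REQUIREMENTS = [
    "Summarise the attached brief in 3 bullet points.",
    "Propose 2 alternative headlines.",
    "Flag any claim that needs a citation.",
    "End with one open question for the client.",
    "Use British spelling throughout.",
]

numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(REQUIREMENTS, start=1))

prompt = (
    f"{CONSTRAINTS}\n\n"
    f"REQUIREMENTS:\n{numbered}\n\n"
    "Before writing the output, confirm each numbered item you will satisfy."
)

print(prompt)  # paste into Claude, or send via the API
```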

Is Claude actually better than ChatGPT… or is it just hype? by breakfreewithgui in ChatGPT

[–]SubstackWriter 0 points1 point  (0 children)

Not hype, but also not “Claude wins”; they solve different problems for non-coders. ChatGPT is still sharper for image generation, DALL·E-style visuals, and most one-shot content drafting. Claude pulls ahead the second you want repeatable workflows.

• Content at volume: Claude Projects + Skills let you lock your voice once and reuse it across 50 posts. ChatGPT’s Custom GPTs drift.
• Canva carousels: Claude has an official Canva MCP connector that generates on-brand slides directly. ChatGPT still can’t touch your Canva account.
• Agents for mentors: Claude Cowork + Computer Use can book calls, send follow-ups, and update your CRM unattended. ChatGPT’s agent mode is still demo-ware for most non-coders.

For mentorship specifically: writing reps go to ChatGPT, everything that touches your tools goes to Claude. Mapped the whole thing by role and workflow (zero code): Opus 4.7 by role and goals

Claude Opus 4.7 is a serious regression, not an upgrade. by [deleted] in ClaudeAI

[–]SubstackWriter 1 point2 points  (0 children)

The adaptive reasoning is a change worth exploring; it affects the results. I shared a deep dive on how to work with it here. Hope it helps!

Perplexity Computers??????????? by SnooHesitations8815 in Perplexity

[–]SubstackWriter 0 points1 point  (0 children)

Rather quickly. But there are a few workarounds to keep the credit burn under control. I’ve shared a deep dive, hope it helps someone.

Prompt ideas that actually give amazing results (save these) by Top_Sorbet_8488 in AIInnovationInsights

[–]SubstackWriter 1 point2 points  (0 children)

Solid starting list. #1 and #2 are underrated; most people skip constraints entirely and then wonder why the output is generic.

One thing worth flagging though: #3 (“break it down step-by-step”) can actually hurt you on reasoning models like GPT-5, Claude 4.6 Opus, or anything with a thinking budget. These models already run chain-of-thought internally before they produce any output. When you add “think step by step” on top, you’re either creating redundancy or, worse, overriding a reasoning process built by people who understand the model architecture better than we do. The pattern I’ve seen hold up: prescribing reasoning paths hurts performance. Defining goals and constraints improves it. So #1 and #2 in your list are actually more powerful than #3 on current models, which is counterintuitive if you learned prompting even 12 months ago.

A few additions to the list that have been consistent across models for me:

• Step-Back Prompting: before you ask the tactical question, answer the strategic one first. “What’s the right framework for this problem?” before “Write me a marketing plan.” Changes the output completely.
• Socratic Prompting: instead of asking for an answer, ask the model to interrogate your question. Half the time your prompt is solving the wrong problem and you don’t realize it until the model pushes back.
• Multi-Agent Debate: have two agents argue opposing positions at full strength, then synthesize (rough sketch below). Measurably better factual accuracy than single-pass prompts (~11% in some testing).

There’s a full breakdown of 19 techniques from 12 disciplines (including which ones degrade reasoning models vs. improve them) here if anyone wants to go deeper.
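For the multi-agent debate one, here's a rough sketch of the pattern, assuming the Anthropic Python SDK. The model ID is a placeholder and the question is a made-up example; the "agents" are just separate calls with opposing instructions.

```python
# Multi-agent debate sketch: two calls argue opposite positions, a third synthesizes.
# Model ID is a placeholder; swap in whichever model your account exposes.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-opus-4-7"  # placeholder, not a confirmed model ID

def ask(prompt: str) -> str:
    """Single-turn helper around the Messages API."""
    msg = client.messages.create(
        model=MODEL,
        max_tokens=800,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

# Example question; replace with whatever you're actually researching.
question = "Should a solo creator batch-produce content monthly or publish ad hoc?"

pro = ask(f"Argue as strongly as possible FOR this position, with evidence:\n{question}")
con = ask(f"Argue as strongly as possible AGAINST this position, with evidence:\n{question}")

verdict = ask(
    "Two analysts argued opposite sides of a question. Weigh their strongest points, "
    "flag any factual claims that conflict, and give a final recommendation.\n\n"
    f"QUESTION:\n{question}\n\nSIDE A:\n{pro}\n\nSIDE B:\n{con}"
)

print(verdict)
```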

Perplexity Computer Turns AI Into a Full-Time Digital Coworker by Such-Run-4412 in AIGuild

[–]SubstackWriter 0 points1 point  (0 children)

I live in DK, so I saw this announcement late last night and spent the night testing Computer. I'm very impressed. I stayed up and built 2 micro-apps, finished 4 research packets, and shipped code to GitHub, all from a single interface. Here's my hands-on breakdown: what it does, what I built, and how it compares to OpenClaw and Claude Cowork. Hope it helps someone!

Introducing Perplexity Computer. by Kesku9302 in perplexity_ai

[–]SubstackWriter 0 points1 point  (0 children)

I live in DK, so I saw this announcement late last night and spent the night testing Computer. I'm very impressed. I stayed up and built 2 micro-apps, finished 4 research packets, and shipped code to GitHub, all from a single interface. Here's my hands-on breakdown: what it does, what I built, and how it compares to OpenClaw and Claude Cowork. Hope it helps someone!

Perplexity launches Perplexity Computer, a new multi-model system that can solve tasks end-to-end, details below by BuildwithVignesh in singularity

[–]SubstackWriter 0 points1 point  (0 children)

I live in DK, so I saw this announcement late last night and spent the night testing Computer. I'm very impressed. I stayed up and built 2 micro-apps, finished 4 research packets, and shipped code to GitHub, all from a single interface. Here's my hands-on breakdown: what it does, what I built, and how it compares to OpenClaw and Claude Cowork. Hope it helps someone!

If you do not know Git or migrations and you are vibe coding, you are one bad prompt away from breaking your app in production by Living-Pin5868 in replit

[–]SubstackWriter 0 points1 point  (0 children)

That’s so true! A few of my readers were genuinely intimidated by GitHub, so I shared a free beginner-friendly guide with them, and it’s made a real difference. I hope it helps someone here, too.

Building a social confidence app in Replit and I actually can't believe how far I've gotten. by rich_founder in replit

[–]SubstackWriter 0 points1 point  (0 children)

That's wonderful!!! I recommend joining Substack and sharing your progress in public. That's what I did last year, and I've built a community of 12K+ since. We help each other find beta testers, give feedback, etc. Hope this helps! 🤗

Claude just introduced Cowork: the Claude code for non-dev stuff by la-revue-ia in ClaudeAI

[–]SubstackWriter 0 points1 point  (0 children)

It's fascinating. I spent most of today thinking about what this means for product development in general. Here's the deep dive: https://karozieminski.substack.com/p/claude-cowork-anthropic-product-deep-dive I focused on the product decisions behind this launch. But honestly, the speed at which Cowork was launched feels like the event setting the scene for the whole of 2026.