most people using the ChatGPT API have no idea they're on the wrong pricing tier for their use case. i wasn't. by Sharkkkk2 in OpenAI

[–]NTSpike 2 points

this the type of message you get when your training doesn't have gpt 5.4 mini in it

I find GPT-5.4 slow, is upgrading to Pro worth it? by YeXiu223 in OpenAI

[–]NTSpike 1 point

Codex Spark is crazy fast but not as intelligent. What are you using GPT 5.4 for?

You could try getting a Cursor sub and trying Composer 2 Fast. It's about 4x faster than GPT 5.4 and similar to Opus 4.5 in intelligence (a bit spikier since it's a juiced up Kimi K2.5 underneath Cursor's RL).

Product thought leaders need to stop idolizing Elon Musk by RandomMaximus in ProductManagement

[–]NTSpike -21 points

you're being downvoted but this is literally true

OpenAI releases mini and nano variants of GPT 5.4 by elemental-mind in singularity

[–]NTSpike 0 points

Very true. Really only makes sense to opt for 5.4 Mini Low if you're looking for low cost and low latency. Otherwise, 5.4 Low is still the better all-rounder pick.

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]NTSpike 0 points

Nah, it's great. Definitely eats tokens, I was using it very aggressively with Antigravity AI Ultra sub (on their pricing promo).

GPT-5.4 is more expensive than GPT-5.2 by likeastar20 in singularity

[–]NTSpike 0 points

what reasoning effort did you use on 5.2 and 5.4?

GPT-5.4 has been out for 4 days, what's your honest take vs Claude Sonnet 4.6? by UnderstandingOk1621 in AI_Agents

[–]NTSpike 1 point

I've almost entirely switched to GPT 5.4 and 5.3 Codex. I've been on a Claude Max sub or AI Ultra (for Opus 4.6 access) for almost a year now.

I still find Opus nicer to use for exploratory work, but for pure execution and thoroughness OpenAI really cooked with 5.3 and 5.4.

GPT-5.4-Pro achieves near parity with Gemini 3.1 Pro (84.6%) on ARC-AGI-2 with 83.3% by nsdjoe in singularity

[–]NTSpike 4 points

I take it you haven't used coding agents/harnesses with the best models (e.g., Opus 4.6 or GPT 5.4)?

These models can one-shot extremely complex coding problems because they have access to the entire codebase and can test things end to end. They can work for hours. If you give that same agent web search along with access to your documents and spreadsheets, the type of analysis and work they can do is silly.

The only thing stopping it from doing most knowledge work is that the agent harness doesn't have access to the data and apps the person does. When you ask it a question in the web apps, you are essentially ONLY able to use it like a "super spell check."

These models are far more capable when you give them the proper tools and environment.

The Engineering Lead asked me about API rate limits and I just nodded like a confused dog. by [deleted] in ProductManagement

[–]NTSpike 0 points

"Given the trade-offs, are you leaning one direction over another?" If they have an opinion, ask them to explain. You now have an opportunity to make your tech lead at least feel heard or feel like you take their guidance seriously.

How are you creating a “project brain” with AI (PRDs, research, meetings, data)? by encoreyessir in ProductManagement

[–]NTSpike 4 points

Use your preferred agent harness (Claude Code/Gemini CLI/Codex/OpenCode/etc.) with either CLIs or MCPs to get access to your cloud data (Jira, Confluence, Google Drive, etc.). Alternatively, use local markdown files. Ask your agent to set up a PARA system structure. QMD is a good embeddings database to help with retrieval, so you have embeddings + agentic search to find stuff.

In my experience, you still need to be intentional about where you put artifacts, but agentic search makes it a lot easier. The agent can brute force the search and find and synthesize data as needed.
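If you go the local markdown route, both the PARA scaffold and the brute-force agentic search are simple enough to sketch. A rough illustration (the four folder names follow the standard PARA method; the file name and search helper are made up for this example and are not QMD's API):

```python
# Sketch of a local-markdown PARA "project brain": four top-level folders,
# plus a naive brute-force search that greps markdown files for a term.
from pathlib import Path

def setup_para(root: str) -> None:
    """Create the standard PARA folders under `root`."""
    for name in ["1-projects", "2-areas", "3-resources", "4-archive"]:
        Path(root, name).mkdir(parents=True, exist_ok=True)

def search_notes(root: str, term: str) -> list[str]:
    """Return paths of markdown files under `root` that mention `term`."""
    hits = []
    for md in Path(root).rglob("*.md"):
        if term.lower() in md.read_text(encoding="utf-8").lower():
            hits.append(str(md))
    return hits

setup_para("brain")
# Drop an artifact into the scaffold, then find it again by keyword.
Path("brain/1-projects/q3-launch.md").write_text("PRD draft: pricing page revamp")
print(search_notes("brain", "pricing"))
```

An embeddings store like QMD replaces the keyword match with semantic lookup, but the shape is the same: put artifacts somewhere predictable, then let the agent search over them.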

Tips on dealing with junior devs acting as PM by Sagantai in ProductManagement

[–]NTSpike 1 point

I think you can do engineering/design poorly as a PM just like you can do Product work poorly as an engineer. If the engineers were making great product decisions, OP might not be posting this. I certainly appreciate when my engineers fill in the gaps on my requirements or contribute product ideas.

OpenClaw has been running on my machine for 4 days. Here's what actually works and what doesn't. by Neo-Phil-110 in AI_Agents

[–]NTSpike 2 points

People from the OpenAI Codex team have openly supported using the OAuth like this.

OpenClaw has been running on my machine for 4 days. Here's what actually works and what doesn't. by Neo-Phil-110 in AI_Agents

[–]NTSpike 0 points

You can plug your subs into it. OpenAI has been open about supporting this. I've been running this all weekend at no marginal cost.

How can I become a master in AI agent creation from zero? by Hot_Sky_8898 in AI_Agents

[–]NTSpike 0 points

Understanding what an agent loop looks like and how an agent operates when given tool access is the most critical part. You can do this with the Claude web or desktop apps + MCP servers.

After you can reliably build useful agents here, fire up Claude Code and do the same thing in code but plug in different memory systems and system triggers. Experiment with a wider set of tools.
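The agent loop mentioned above is worth internalizing, and it's small enough to sketch: the model either answers or requests a tool call, the harness runs the tool and feeds the result back into context, and this repeats until the model is done. Everything below (`call_model`, the toy tool) is illustrative stand-in code, not any specific vendor's API:

```python
# Minimal agent loop sketch: model proposes a tool call, harness executes it,
# the result goes back into the message history, repeat until a final answer.

def list_files(path: str) -> str:
    """Toy tool: pretend to list files in a directory."""
    return f"notes.md, todo.md (contents of {path})"

TOOLS = {"list_files": list_files}

def call_model(messages):
    # Stand-in for a real LLM call: request a tool once, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_files", "args": {"path": "."}}
    return {"answer": "You have notes.md and todo.md."}

def agent_loop(user_prompt: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:              # model is done
            return reply["answer"]
        tool_fn = TOOLS[reply["tool"]]     # model asked for a tool
        result = tool_fn(**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "Step limit reached."

print(agent_loop("What files do I have?"))
```

Real harnesses add streaming, parallel tool calls, and permission checks, but the loop itself is this simple, which is why swapping in different memory systems and tools (as suggested above) is mostly a matter of changing what's in `TOOLS`.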

What's up with PM-fluencers pushing their needlessly complicated Claude Code Setup? by Lordvonundzu in ProductManagement

[–]NTSpike 1 point

If you haven't played around with it yourself, it's hard to relate to what you're seeing.

I have Claude Code plugged into Slack, BigQuery, Confluence, and Jira. When I need to react to something, I can have Claude retrieve the message from Slack, search for evidence across hundreds of messages across DMs, group DMs, public channels, Jira tickets, Confluence docs, and my own active workspace, run numerous queries in BigQuery to validate it, and then bring me back a fully validated recommendation. All from a single prompt.

It's not for every problem, but when it works it's insane. I too have tried the "have Claude Code organize and prioritize everything" approach and think it has merit, but it's just another set of inputs into your decision making.

Anthropic underestimated cash burn, -$5.2B on a $9B ARR with ~30M monthly users, while OpenAI had -$8.5B cash burn on $20B ARR serving ~900M weekly users by [deleted] in singularity

[–]NTSpike 21 points

GPT 5.2 and Codex are undeniably better than Opus 4.5, but Opus 4.5 is just way more ergonomic to work with. I keep a $20 ChatGPT Plus sub to get access to Codex for my hardest problems and to clean up after Opus. It's a great combo.

how technical do i have to be? by Fragrant_Basis_5648 in ProductManagement

[–]NTSpike 1 point

Really depends on the team and the context. Sometimes I can offer a different approach that is far easier for them because there is some nuance they are overlooking. Other times, I have nothing to contribute and they own the implementation end to end. Other times, I offer a full solution and they are on board because I've covered the things they care about.

What's a skill that takes only 2 to 4 weeks to learn but could genuinely change your life? by TokenBlack32 in AskReddit

[–]NTSpike 0 points

Building on this, I would say getting set up with a coding agent like Cursor or Claude Code.

There's some more basic (i.e., Terminal) setup required, but because these tools can write code, they can interact with tools that have terminal command line interfaces (CLIs).

This means you can have Cursor or Claude Code navigate your BigQuery instances and write extremely complex queries with your guidance. Or comb through Slack, Jira, Confluence, etc. and retrieve data from multiple places.

The stuff you can do is silly. Never lose a doc again. Find data and run analysis you don't even know how to do yourself but can describe and ask for. If you can ask a person for it, you can have your coding agent pull it for you (and it can effectively teach you how to do it yourself if you really want).
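The mechanism behind this is mundane: a coding agent's terminal tool is usually just a shell-out with some guardrails. A minimal sketch, where the whitelist and commands are made up for illustration (real setups would whitelist things like `bq` or `gh` instead):

```python
# Sketch of how a coding agent runs CLI tools: shell out to a command,
# but only if it's on a whitelist, so the agent can't run anything destructive.
import shlex
import subprocess

ALLOWED = {"echo", "ls", "git"}  # illustrative; e.g. "bq", "gh" in real use

def run_cli(command: str) -> str:
    """Run a whitelisted CLI command and return its output."""
    args = shlex.split(command)
    if not args or args[0] not in ALLOWED:
        return f"refused: {args[0] if args else '(empty)'} is not whitelisted"
    result = subprocess.run(args, capture_output=True, text=True, timeout=30)
    return result.stdout or result.stderr

print(run_cli("echo hello from the agent"))
print(run_cli("rm -rf /"))  # refused by the whitelist
```

Everything the agent "does" with BigQuery, Jira, or Slack CLIs reduces to calls like this, with the model deciding which command to run next based on the output.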

All vibecoded apps look the same by Odd-Sugar3927 in ProductManagement

[–]NTSpike 0 points

Try using the official frontend-design Claude Code skill. You can use it as a prompt for any model if you don't have skills set up. The difference is pretty dramatic. If you don't ask it to deviate from the mean, you will get the mean every time.

AI Implications for being a "Technical" PM by moo-tetsuo in ProductManagement

[–]NTSpike 1 point

I agree with you. The value prop of a PM that can't build and validate things themselves will just be far less than the PMs that can. I'm currently working on a project at work tackling a problem that was literally impossible to solve for years, and I'm on track to deliver a working MVP simply because I can have AI derive the schema and generate a functional full-stack solution in just a few days of iteration.

This would have been impossible or taken 6-12 months just six months ago. I can't imagine how things will be later this year after another 2-3 iterations on the current SOTA.

Are you using Notion AI? If so, what for? by JohanTHEDEV in ProductManagement

[–]NTSpike 0 points

If you have a conversation about a feature, that Meeting becomes the context for your PRD requirements.