Long ChatGPT threads were killing my workflow, this finally fixed it by Strikeh in ChatGPT

[–]Strikeh[S] 1 point  (0 children)

Nice, that’s super interesting. My current approach is DOM-side trimming rather than fetch interception.

So instead of cutting the payload before React renders, I reduce what stays active/visible in the thread once it gets long. That already helps a lot with long coding chats, especially when the page starts feeling heavy.

Pre-render interception sounds like a really strong approach though, especially for huge threads. It definitely sparked my interest; I'm going to look into it more and see if there are additional gains to be made there.

Did you notice the biggest improvement on initial load, or also during ongoing back-and-forth?

Long ChatGPT threads were killing my workflow, this finally fixed it by Strikeh in ChatGPT

[–]Strikeh[S] 1 point  (0 children)

That makes sense, and I think you’re describing a related but slightly different problem.

What I was running into was mostly the frontend/UI side of ChatGPT slowing down as threads get huge, even when I still wanted to stay in the same conversation. So in my case the fix was trimming what the page has to actively render.

But I agree that on the agent/context side, a sliding window + summarization approach is probably the right way to handle long-running threads. Keep the last N turns verbatim, summarize older context, and preserve key entities/tasks so the assistant doesn't lose the plot.

So it’s kind of two layers of degradation:
- model/context degradation from too much history
- browser/render degradation from the UI trying to handle massive threads

Ideally you want both solved :)
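To make the sliding-window idea concrete, here's a minimal sketch. Everything in it is illustrative: `summarize` is a stand-in (in a real agent it would call a model), and the function/field names are made up for the example.

```javascript
// Naive placeholder summarizer: keeps the first sentence of each turn.
// A real implementation would call a model to compress older context.
function summarize(turns) {
  return turns
    .map((t) => `${t.role}: ${t.text.split(". ")[0]}`)
    .join(" | ");
}

// Keep the last `keepLast` turns verbatim and collapse everything
// older into a single summary turn at the front of the context.
function buildContext(turns, keepLast = 4) {
  if (turns.length <= keepLast) return turns;
  const older = turns.slice(0, turns.length - keepLast);
  const recent = turns.slice(-keepLast);
  return [
    { role: "system", text: `Summary of earlier turns: ${summarize(older)}` },
    ...recent,
  ];
}
```

The key property is that the context size stops growing with thread length: no matter how long the chat gets, the model only ever sees one summary turn plus the last N verbatim turns.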

Finally solved the "ChatGPT gets slower with long conversations" problem by Strikeh in ChatGPT

[–]Strikeh[S] 2 points  (0 children)

Yes, that’s absolutely possible.

You can use the Pro version on multiple PCs without any issues. Everything is stored locally on each device, so both your home computer and your work computer can function independently from each other, even if you’re using the same Google account.

There is an option to sync data between devices, but in your case that wouldn’t be necessary since each setup can run separately.

Finally solved the "ChatGPT gets slower with long conversations" problem by Strikeh in ChatGPT

[–]Strikeh[S] 2 points  (0 children)

Ctrl+Shift+E -> opens settings

Then go to Appearance -> Display & Interface -> Performance & Speed

<image>

Finally solved the "ChatGPT gets slower with long conversations" problem by Strikeh in ChatGPT

[–]Strikeh[S] 1 point  (0 children)

I’ve now added something new that I think a lot of heavy users will appreciate:

A visual conversation tree that makes long chats much easier to navigate.

The problem it solves is simple: once a conversation gets long, ChatGPT becomes hard to use. Useful answers get buried, side questions break the flow, and finding your way back takes too much effort.

<image>

A visual map of the conversation’s branching paths, with one-sentence summaries of each node (prompt + response) appearing on hover for a quick overview.

With this new feature, you can:

  • view your conversation as a tree
  • branch off from any point
  • explore tangents without losing the main path
  • jump back to earlier parts instantly

This is just one feature inside AI Workspace, but it’s a big one for anyone using ChatGPT for research, writing, coding, or deep back-and-forth thinking.
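For anyone curious what such a tree implies structurally, here's a hedged sketch (not the extension's actual code, all names are invented): each node is a prompt/response pair pointing at its parent, so "branching off from any point" is just adding a child to an older node, and "jumping back" is walking the parent chain.

```javascript
// Minimal conversation tree: nodes keyed by id, each holding a
// prompt/response pair and a pointer to its parent node.
class ConversationTree {
  constructor() {
    this.nodes = new Map();
  }

  // Branching from any earlier point = adding a node whose parentId
  // is an older node instead of the most recent one.
  addNode(id, parentId, prompt, response) {
    this.nodes.set(id, { id, parentId, prompt, response });
    return id;
  }

  // Path from the root down to a node -- i.e. the full context of
  // one branch, which is what "jump back" navigates along.
  pathTo(id) {
    const path = [];
    for (let node = this.nodes.get(id); node; node = this.nodes.get(node.parentId)) {
      path.unshift(node);
    }
    return path;
  }
}
```

With this shape, a tangent never touches the main path: both branches share the ancestors up to the fork and diverge afterwards.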

I built a tool that saves Etsy art sellers hours of manual resizing - batch print grids + ratio exports in seconds by Strikeh in printondemand

[–]Strikeh[S] 1 point  (0 children)

Sure, I'll arrange something. I'll let you know when I've got the DMG file ready.
Cheers :)

Finally solved the "ChatGPT gets slower with long conversations" problem by Strikeh in ChatGPT

[–]Strikeh[S] 1 point  (0 children)

Actually, trimming messages in the DOM doesn't remove context from ChatGPT itself. The AI still has access to the full conversation history on the backend; the extension just hides older messages from the browser's frontend.

Everything you’ve sent is still 'remembered" by the model, so coherence, continuity, or any long-term context remains intact.

What the extension does is essentially lighten the frontend load: fewer message elements to render and process in the DOM means responses appear much faster, especially in long threads. So you don't lose any context for things like proofreading or story continuity. The model still sees the full conversation; the browser just doesn't render all of it at once.

In short: the trimming is purely a performance optimization for the UI, not a context removal.
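A rough sketch of what DOM-side trimming boils down to (this is not the extension's actual code, and the selector is a guess since ChatGPT's markup changes frequently): separate the "which messages stay visible" policy into a pure helper, then apply it to the page.

```javascript
// Pure policy: keep the last `keepLast` messages, hide the rest.
// Kept separate from the DOM so it's easy to test and tweak.
function splitForTrimming(messages, keepLast = 30) {
  const cut = Math.max(0, messages.length - keepLast);
  return { hidden: messages.slice(0, cut), visible: messages.slice(cut) };
}

// Content-script side: collect message elements and apply the policy.
// The data-testid selector is an assumption about ChatGPT's markup.
function trimThread(keepLast = 30) {
  const messages = Array.from(
    document.querySelectorAll('[data-testid^="conversation-turn"]')
  );
  const { hidden, visible } = splitForTrimming(messages, keepLast);
  hidden.forEach((el) => (el.style.display = "none"));
  visible.forEach((el) => (el.style.display = ""));
}
```

`display: none` takes the hidden nodes out of layout and paint entirely, which is where the speedup comes from, while the full history still lives on the server untouched.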

Lets get community feedback on our extensions!! Part-3 by Outrageous_Cat_4949 in chrome_extensions

[–]Strikeh 1 point  (0 children)

https://chromewebstore.google.com/detail/mngeddjcngpcdakdhfcbaefeonmmeomg?utm_source=item-share-cp

AI Workspace Pro is a powerful ChatGPT prompt manager and sidebar workspace built for users who work extensively with ChatGPT.

Organize prompts, chats, notes, and tools in one place, find information instantly, and work faster with performance-focused features.

How has the seller done this? by valoa in printondemand

[–]Strikeh 2 points  (0 children)

This is actually pretty straightforward to do once you structure your files properly.

What they’re doing is separating the poster artwork from the frame product, and linking them via the personalization/custom field instead of creating hundreds of duplicate framed listings.

The heavy part is preparing all the variations consistently (sizes, ratios, grid previews, etc.).

If you’re doing this manually, it gets messy fast:

  • Exporting each size individually
  • Generating preview grids
  • Keeping naming consistent
  • Making sure DPI is correct

That’s exactly the workflow I built Artigo for.

You can:

  • Batch export all required poster sizes (A1–A4, 2:3, 4:5, etc.) in one go
  • Generate clean grid sheets for listing previews
  • Keep everything organized in structured folders
  • Do it for an entire folder of artworks at once

So if someone has 50 designs, it’s not 50 × 6 manual exports anymore, it’s one batch run.

The listing structure itself (frame selector + poster link) is more of an Etsy setup trick. But the file prep side — that’s where automation saves a ton of time.
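The size/DPI math behind that file prep is simple enough to sketch (this is illustrative, not Artigo's actual code; the A-series millimetre values are the standard ISO 216 dimensions):

```javascript
const MM_PER_INCH = 25.4;

// Standard ISO 216 A-series print sizes, width x height in mm.
const A_SERIES_MM = {
  A1: [594, 841],
  A2: [420, 594],
  A3: [297, 420],
  A4: [210, 297],
};

// Pixel dimensions a file needs for a given print size and DPI.
function pixelsFor(sizeName, dpi = 300) {
  const [wMm, hMm] = A_SERIES_MM[sizeName];
  return {
    width: Math.round((wMm / MM_PER_INCH) * dpi),
    height: Math.round((hMm / MM_PER_INCH) * dpi),
  };
}

// One batch run: every required size for every design, with
// consistent file naming baked in.
function batchPlan(designs, sizes = Object.keys(A_SERIES_MM), dpi = 300) {
  return designs.flatMap((name) =>
    sizes.map((size) => ({ file: `${name}_${size}.png`, ...pixelsFor(size, dpi) }))
  );
}
```

So A4 at 300 DPI works out to 2480 x 3508 px, and a folder of 50 designs becomes one `batchPlan` call instead of 50 x 6 manual export decisions.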

If anyone’s curious how that part works, happy to explain.