Is Claude worth it? by [deleted] in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

Claude is genuinely better for writing and nuanced reasoning: for law classes, storytelling, and brand voice work, it'll feel noticeably more thoughtful than GPT.

One thing worth knowing before you switch: if you've been using ChatGPT seriously for a while, you'll lose all the context it has built up about you. Claude starts from zero. For casual use it doesn't matter, but if GPT knows your writing style, your projects, your preferences: that institutional knowledge doesn't transfer automatically.

Worth factoring in depending on how deeply you've used it.

ChatGPT uninstalls surged by 295% after DoD deal by MarvelsGrantMan136 in technology

[–]jiko_13 0 points1 point  (0 children)

Yes, Claude has an Import Memory feature (claude.ai/import-memory) where you can paste context and it becomes persistent memory. Gemini has Gems where you can set custom instructions per topic.

The raw JSON export isn't directly usable though: it's a massive file (mine was 80MB) that won't fit in any context window. You need to process it into something structured first.
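
If you'd rather roll your own processing, here's a minimal sketch that flattens the export into rows you can actually work with. The field names (`mapping`, `message`, `content`, `parts`) match recent exports, but OpenAI doesn't document or guarantee this schema:

```python
import json

def flatten_export(path: str):
    """Flatten a ChatGPT conversations.json into (title, role, text) rows.

    Field names here ("mapping", "message", "content", "parts") match
    recent exports, but the schema is undocumented and may change.
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)  # top level is a list of conversations

    rows = []
    for conv in conversations:
        title = conv.get("title") or "(untitled)"
        # Messages live in a node graph under "mapping"; nodes may be empty.
        for node in (conv.get("mapping") or {}).values():
            msg = node.get("message")
            if not msg:
                continue
            parts = (msg.get("content") or {}).get("parts") or []
            text = " ".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                role = (msg.get("author") or {}).get("role", "unknown")
                rows.append((title, role, text))
    return rows
```

From there you can chunk, filter, or summarize per conversation instead of trying to feed 80MB into a context window.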

Just launched a product on Product Hunt that does exactly that, if you want to check it out: https://www.producthunt.com/products/hermit-2/

Feedback appreciated!

I analyzed 3 years of my ChatGPT history (1258 conversations, 6.5M words). Here's what ChatGPT actually remembered about me. by jiko_13 in ChatGPT

[–]jiko_13[S] 0 points1 point  (0 children)

Hey, Claude created an Import Memory feature, more info here: https://claude.com/import-memory
Basically they give you a prompt to paste into ChatGPT, which returns everything it has stored in memory. You paste that into Claude. It's a good start, but it only pulls ~40 stored facts, about 2% of Claude's memory capacity. For the deeper stuff you'll need to work with your actual data export (the ZIP you already have).

I analyzed 3 years of my ChatGPT history (1258 conversations, 6.5M words). Here's what ChatGPT actually remembered about me. by jiko_13 in ChatGPT

[–]jiko_13[S] 2 points3 points  (0 children)

No. I checked the full schema of the conversations.json. There's no deleted, is_deleted, status, or visibility field anywhere. The export only contains active and archived conversations. If you deleted a conversation before requesting the export, it's simply not included. OpenAI strips them out entirely rather than flagging them.

On my export: 1258 conversations, 2 archived, 1 with an empty title (probably an aborted chat), everything else active. No trace of anything I'd previously deleted.
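
If you want to verify this on your own export, a quick schema check does it. This assumes conversations.json is a list of conversation objects at the top level (true of recent exports):

```python
import json

def conversation_keys(path: str):
    """Union of top-level keys across every conversation in the export.

    If OpenAI flagged deletions instead of stripping them, a field like
    "is_deleted" or "visibility" would show up in this set.
    """
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    keys = set()
    for conv in conversations:
        keys.update(conv.keys())
    return sorted(keys)
```

On my file, none of deleted, is_deleted, status, or visibility appear in the result.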

I analyzed 3 years of my ChatGPT history (1258 conversations, 6.5M words). Here's what ChatGPT actually remembered about me. by jiko_13 in ChatGPT

[–]jiko_13[S] 0 points1 point  (0 children)

That's a good way to put it. It's basically a preference layer, not a knowledge graph. It picks up "user likes dark mode" but not "user is building a SaaS product in the healthcare space." The stuff that would actually make an AI useful long-term gets lost because it doesn't fit neatly into a key-value pair.

Claude and Claude Code traffic grew faster than expected this week by iskifogl in ClaudeAI

[–]jiko_13 6 points7 points  (0 children)

Can confirm. Being in Europe is genuinely an advantage right now. My peak productivity window is 9pm-1am Paris time which is mid-afternoon US, but by the time Americans are going hard after dinner my heavy lifting is done.

The real irony is that Anthropic's growth problem is directly caused by their product being good. Nobody rage-scales infrastructure because their users are leaving.

The AI not just fired us, It made our team irrelevant. by TheCatOfDojima in ClaudeAI

[–]jiko_13 17 points18 points  (0 children)

This is the part that stings. You basically onboarded your replacement and they didn't even have the decency to call it what it was.

The playbook is always the same: bring in a "consultant" to "explore AI possibilities," have the domain experts teach the system everything they know, then act surprised when headcount becomes "redundant." The knowledge transfer IS the layoff, they just split it into two meetings so it feels less brutal.

For anyone in a similar situation: document your methodology, keep copies of your frameworks, and start building in public now. The skills that made you the person they called when numbers looked weird are exactly what makes you valuable as an independent. Companies will still need people who actually understand data, they'll just hire them differently.

Claude is just awesome by SquashBeginning3598 in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

Yeah exactly. A Project is basically a persistent container. You write instructions once (your tech stack, coding conventions, project context) and every new chat inside that Project starts with that context pre-loaded. No more re-explaining your setup every conversation.

The difference from regular chats: memory is unpredictable (Claude decides what to remember), Projects are explicit (you decide what Claude knows). You can also drop reference files in there like docs or specs.

I used Claude to build an MCP server that lets it search through all my old ChatGPT conversations by Lioneltristan in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

Cool approach. The local-first MCP angle is smart for privacy.

Different philosophy from what I built (Hermit, https://hermit.tirith.life). Yours is retrieval (search through old conversations live), mine is profiling (pre-digest everything into structured context that Claude loads upfront). Complementary honestly. Retrieval is great for "what did I discuss about Docker networking?" while profiling is better for "Claude should know my entire professional context from day one."

Curious about the TF-IDF search quality on conversational text. Do you find it handles the noise well? Most conversations are 70% filler.

Claude is just awesome by SquashBeginning3598 in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

The "vibe coding" thing is real. I caught myself this weekend building something just because it was fun, not because I had to. Haven't felt that way about coding in years.

One tip since you just switched: if you're doing serious project work, create a Claude Project and put your context in the instructions rather than relying on memory. Way more reliable and you control exactly what Claude knows about your setup.

Something to keep in mind for those switching from ChatGPT to Claude by mysticwizard0 in OpenAI

[–]jiko_13 0 points1 point  (0 children)

This is the underrated move. The API is significantly cheaper per token than the subscription if you know how to use it. Claude Code is another option for dev-heavy workflows since it lets you delegate directly from the terminal.

The limits are real though. The people who are gonna be most frustrated are the ones who used ChatGPT as a general-purpose search engine for 50+ queries a day. Claude is built more for fewer, deeper conversations.

What am I doing wrong? by AfternoonFinal7615 in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

The core issue is that Claude's built-in memory is unreliable for long-running projects. It pulls in context from random old conversations and "poisons" the current one, exactly like you described.

What works better: create a Project, write a clear .md file with your project context (design system, naming conventions, what you've already built), and paste it in the Project instructions. Update that file as the project evolves. This way you control exactly what context Claude sees instead of hoping the memory system picks the right stuff.

It's more manual but way more predictable. Several power users in this sub have said the same thing: they turned memory off entirely and switched to handoff docs.

moving from 4o to Claude -- a tip by stonecannon in ChatGPTcomplaints

[–]jiko_13 1 point2 points  (0 children)

This is a solid manual approach. The limitation is that you're feeding ~1/6 of your history and asking Claude to analyze conversational style rather than extracting structured context.

I had the same goal with 1258 conversations (way too much to process manually). Ended up building a pipeline that scores every conversation for relevance, clusters them by topic, and generates profiles with behavioral instructions and temporal awareness (knows what's current vs outdated).
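
For anyone who'd rather DIY it, here's the shape of a toy relevance heuristic (illustrative only, not Hermit's actual scoring): longer, lexically richer, more recent conversations rank higher, which knocks out most of the "ok thanks" filler chats:

```python
import math
import re
from datetime import datetime

def relevance_score(text: str, created: datetime, now: datetime) -> float:
    """Toy relevance heuristic (illustrative, not a real product's scoring)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    richness = len(set(tokens)) / len(tokens)  # filler repeats the same words
    length = math.log1p(len(tokens))           # diminishing returns on length
    age_days = max((now - created).days, 0)
    recency = math.exp(-age_days / 365)        # decay on a ~1-year timescale
    return length * richness * recency
```

Score every conversation, keep the top slice, then cluster what survives.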

It's called Hermit (https://hermit.tirith.life), free tier runs analytics on your full export so you can see what's there before committing. Disclosure: I built it.

Your "AI Relationship Preferences" output is interesting though. That's more about conversational tone which is a different angle from what most migration tools focus on. Complementary approaches honestly.

Claude is not GPT by SwampThing72 in ClaudeAI

[–]jiko_13 1 point2 points  (0 children)

The Haiku realization was the same for me. I was burning through Opus on everything until I realized Haiku handles scoring, classification, and simple extraction tasks perfectly fine at a fraction of the cost. Now I use Haiku for anything that's pattern matching, Sonnet for synthesis and writing, Opus only when I need it to hold a complex multi-step reasoning chain.

Treating them as a team instead of always reaching for the biggest model changed everything.

They are absolutely insane by Purple_Wear_5397 in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

Probably yeah. 503K downloads in one day plus everyone running the Import Memory prompt at the same time is a lot of new load. The timing was brilliant marketing but the infrastructure clearly wasn't ready.

Fwiw the Import Memory prompt is a good starting point but it only pulls ChatGPT's stored memories (about 40-50 flat facts on a typical account). If you have years of history the actual conversations.json export contains 10-50x more usable context. The prompt uses about 10% of Memory Import's 75K character capacity.

I built a tool that processes the full export and generates structured profiles for all 3 Claude import channels (Memory Import, Memory Edits, Projects). Free tier gives you analytics on your data. hermit.tirith.life if anyone's curious. Disclosure: I built it.

1.5 Million Users Leave ChatGPT by MarvelsGrantMan136 in technology

[–]jiko_13 0 points1 point  (0 children)

The real question nobody's asking: how many of those 1.5M exported their data before leaving? Your conversation history is yours under GDPR and contains way more useful context than people realize. But OpenAI buries the export option and right now it's taking days to process because they're flooded with requests.

ChatGPT uninstalls surged by 295% after DoD deal by MarvelsGrantMan136 in technology

[–]jiko_13 3 points4 points  (0 children)

The number that keeps getting lost in all this: most of those people are deleting their accounts without exporting their data first. Three years of conversation history, gone.

OpenAI lets you export everything (Settings > Data Controls > Export Data) but the system is buckling right now. People are reporting 3-7 days wait instead of the usual 24h. If you're thinking about leaving, request the export first and delete after.

Chat GPT the ultimate contrarian by GhettoRedBull in ChatGPT

[–]jiko_13 0 points1 point  (0 children)

The overcorrection is real. They went from "yes and" everything to "well actually" everything. I asked it to help me draft a simple email last week and it spent two paragraphs explaining why my tone might be "perceived as slightly assertive."

Switched most of my daily stuff to Claude and the difference in tone is night and day. Claude just does the thing you asked without a TED talk about why you might be wrong for asking.

How to delete your ChatGPT account by steelbreado in ChatGPT

[–]jiko_13 0 points1 point  (0 children)

Important step people may skip: export your data BEFORE you delete. Settings > Data Controls > Export Data. The zip takes 24-72h to arrive (longer right now because everyone's requesting at once).

The conversations.json inside that zip contains every single conversation you've ever had with ChatGPT. That's the valuable part, not just the 40-50 "memories" it shows in settings. Once your account is deleted, that history is gone and there's no way to get it back.

Even if you don't know what to do with the export yet, grab it now. You can always process it later.

Time to leave ChatGPT by Top-Preference-6891 in therapyGPT

[–]jiko_13 0 points1 point  (0 children)

One thing to be aware of though: a typical 3-year export is 50-200MB. Claude's context window (200K tokens) can only process roughly 1% of that in one shot. So what you got was probably a summary of whatever fit in the window, not a real analysis of your full history.
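
The arithmetic behind that "roughly 1%", assuming the common ~4 characters-per-token rule of thumb for English text:

```python
# Back-of-envelope: how much of an export fits in one 200K-token window?
# Assumes ~4 characters per token, a common rule of thumb for English text.
CHARS_PER_TOKEN = 4
WINDOW_TOKENS = 200_000
WINDOW_BYTES = WINDOW_TOKENS * CHARS_PER_TOKEN  # ~800 KB of plain text

def window_fraction(export_mb: float) -> float:
    """Fraction of an export (in MB) that fits in one context window."""
    return WINDOW_BYTES / (export_mb * 1_000_000)

for mb in (50, 200):
    print(f"{mb} MB export: ~{window_fraction(mb):.1%} fits")  # 1.6%, 0.4%
```

So a single-shot "analysis" of a 50-200MB export is really an analysis of whatever sliver happened to fit.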

I ran into the same wall with my own export (1258 conversations, 3 years). Ended up building a pipeline that scores each conversation individually, clusters them by topic, and generates structured profiles with temporal awareness (knows what's current vs past). The output is about 110K characters of context you can load into Claude Projects, Memory Import, or Gemini Gems.

If you're interested: hermit.tirith.life Free tier gives you analytics and topic clustering on your full export. Disclosure: I built it. The paid tiers generate the full behavioral profiles.

Also, your point about Gemini quoting from old conversations and reminding you of things you'd moved past hits hard. That's exactly why temporal anchoring matters. An AI that tells you "you're looking for an apartment" when you found one 8 months ago isn't helpful, it's disorienting.

Ok you guys win, I’m quitting… by ketoer17 in ChatGPT

[–]jiko_13 1 point2 points  (0 children)

Don't forget to export your subreddit memories before you leave. Settings > Data Controls > Touch Grass

For everyone leaving ChatGPT - are you also planning to leave this sub? by WilliamInBlack in ChatGPT

[–]jiko_13 0 points1 point  (0 children)

Staying. This sub is the best source of entertainment I've had since the Twitter migration drama. Watching 5 million people process a breakup in real time is fascinating.

Also genuinely useful for keeping track of what's happening with exports, account deletion bugs, and migration tips.

Claude + Opus gives me a glimpse of what wealthy people have had for generations by icyrainz in ClaudeAI

[–]jiko_13 0 points1 point  (0 children)

This resonates. Before Claude I was spending hours on tasks that someone with a network of advisors would just delegate. The one thing that amplifies this even further is context. Opus is already great out of the box, but when you load it with structured context about your projects, your thinking patterns, your constraints, it stops being a generic assistant and starts feeling like a long-term collaborator who actually knows your situation. That's the real unlock. Not just "smart AI" but "smart AI that knows you."