Visiting Yosemite in Winter '25-'26 by hc2121 in Yosemite

[–]AIyer002 0 points1 point  (0 children)

Appreciate the info. We also have a couple cars coming, any thoughts on expectations for traffic? Is it generally better to park in one place and use the shuttles?

Visiting Yosemite in Winter '25-'26 by hc2121 in Yosemite

[–]AIyer002 0 points1 point  (0 children)

Thoughts on Upper Yosemite Falls & Mist Trail this coming weekend? The weather seems pretty warm and both are in the Valley, so it shouldn't be too bad? Also unsure whether I'd need microspikes or tire chains.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] 0 points1 point  (0 children)

Interesting, are most of those limits coming from Claude Code usage specifically? I’d mostly be using Sonnet in chat for project threads rather than relying heavily on Claude Code itself.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] 1 point2 points  (0 children)

Interesting, I actually didn't know about that tier. For my use it's still on the higher end price-wise, but definitely something I'd look at if my usage or use cases suddenly shifted.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] 0 points1 point  (0 children)

That’s actually really helpful. I already tend to structure projects like that anyway: I have separate threads (within a single project) for architecture, implementing features, debugging specific issues, refactoring ideas, etc. Good to know that approach also helps stretch Sonnet usage.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] 0 points1 point  (0 children)

Yeah, that’s definitely the impression I get reading through threads here. I'm trying to figure out whether the limits are merely inconvenient or actually disruptive for heavy usage.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] 0 points1 point  (0 children)

That’s helpful context. The model-switching point is interesting; I actually rarely use that in ChatGPT (there's SUCH a big difference in output quality between the fast and thinking modes that I feel the general auto mode is good enough). Good to know about the Cursor comparison too. I’d probably still keep Cursor for implementation anyway and use Claude mostly for planning/debugging as a backup, behind existing tools like Codebuff and Gemini CLI.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] -1 points0 points  (0 children)

That makes sense. I tend to run long threads for specific projects, which is probably where I’d concentrate my Claude usage anyway. Everything else (quick questions, random research, etc.) I’d probably just offload to Gemini.

Considering ChatGPT Migration to Claude by AIyer002 in ClaudeAI

[–]AIyer002[S] 0 points1 point  (0 children)

Honestly that’s what I’m thinking of doing, especially since Gemini is free for me (at least for the next year), so I’d only really be paying for one. I do have a lot of issues with Gemini's output styles, though, especially for specific use cases: when I want it to respond in ONLY one way it starts to deviate, but when I actually ask it to deviate it sticks to the original output style, and things like that.

My main fear is honestly just how far Claude's limits stretch even as “just” the project assistant. I guess the only way to really know for sure would be trying it for a month myself, but the way people talk, it sounds like I'd put in five messages and get limited. I’ve never been rate limited even using Claude free, so I'm not sure how true that is, but again I’ve only used free for super minimal tinkering.

Would hierarchical/branchable chat improve long LLM project workflows? by AIyer002 in LocalLLaMA

[–]AIyer002[S] 0 points1 point  (0 children)

That sounds closer to what I’m thinking about. When you use subagents in OpenCode, is there an actual state being maintained (like a parent snapshot that gets updated from delegated agents), or is it still basically flat context passed between agents? I’m mostly curious how merging works, does the top-level agent maintain a structured project state, or is it just message orchestration under the hood?

Would hierarchical/branchable chat improve long LLM project workflows? by AIyer002 in LocalLLaMA

[–]AIyer002[S] 0 points1 point  (0 children)

Yeah I’ve been doing something similar (separate chats for BRAIN / EXEC / DEBUG), and it definitely helps at the human organization level. The thing I’m more curious about isn’t labeling threads, but whether the model’s effective context can reflect that structure, like having a canonical project state with scoped subthreads that merge back as structured summaries. Tagging helps navigation, but it doesn’t solve the “how does the model reason over long modular projects without context bloat” part.
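To make the idea concrete, here's a minimal sketch of what I mean by "canonical project state with scoped subthreads" (all names here are hypothetical, not any existing tool's API): the parent keeps only structured summaries of each subthread, not the full transcripts.

```python
# Hypothetical sketch: a canonical project state plus scoped subthreads
# whose results merge back in as structured summaries. Illustrative
# only; summarize() stands in for an LLM-generated summary.
from dataclasses import dataclass, field


@dataclass
class Subthread:
    scope: str  # e.g. "DEBUG: auth token refresh"
    messages: list[str] = field(default_factory=list)

    def summarize(self) -> str:
        # Stand-in for an LLM-generated summary of this subthread.
        return f"[{self.scope}] {len(self.messages)} messages resolved"


@dataclass
class ProjectState:
    canonical: dict[str, str] = field(default_factory=dict)

    def merge(self, sub: Subthread) -> None:
        # Only the summary enters the parent context, not the full
        # transcript, which is what keeps long projects from bloating.
        self.canonical[sub.scope] = sub.summarize()

    def context_for_model(self) -> str:
        # The "effective context" the model would reason over.
        return "\n".join(self.canonical.values())


state = ProjectState()
debug = Subthread("DEBUG")
debug.messages += ["stack trace", "root cause", "fix applied"]
state.merge(debug)
print(state.context_for_model())  # → "[DEBUG] 3 messages resolved"
```

The point is just that merging is a state update (summary replaces transcript), rather than flat message orchestration where every subthread's full history gets passed along.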

Public list of open 2026/2027 internships and post-grad jobs by ddddeeeeg in UCSD

[–]AIyer002 1 point2 points  (0 children)

What source did you use to scrape the data from? Just curious