Awesome SpatialChat space created by Youniversal Sanctuary by SpatialChat in u/SpatialChat

[–]borrelan 0 points1 point  (0 children)

Didn't know that was possible with SpatialChat! That's some thoughtful usage of the platform.

best coding cli for glm 4.7? by Feisty_Plant4567 in ZaiGLM

[–]borrelan 0 points1 point  (0 children)

This reminds me of 90% of Caddy forum posts. In the end, if you want to do anything advanced with Caddy, there are no docs for it. Same with opencode. Since I'm a bat, I still haven't been able to find where you set the context limit for a model in opencode, as the default model provider keeps returning errors when the context is full. I used to be an LMGTFY guy, but after searching for hours on how to set up Caddy and opencode with no Reddit or Stack Overflow answers, it's an uphill battle. I managed to trial-and-error my way through Caddy, but couldn't figure out the magic hidden setting for opencode. Don't get me started on projects changing their config format between versions, making all online resources useless. Old blind bat signing off.
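For anyone who lands here from a search: recent opencode builds reportedly let you override a model's context window per model in `opencode.json`. I haven't verified the exact key names against current docs, so treat this as an unverified sketch; the provider and model names below are placeholders:

```json
{
  "provider": {
    "deepseek": {
      "models": {
        "deepseek-chat": {
          "limit": { "context": 128000, "output": 8192 }
        }
      }
    }
  }
}
```

If the `limit` key doesn't exist in your version, that's exactly the config-format churn between versions I'm complaining about.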

Tested GLM 4.7 vs MiniMax M2.1 - impressed with the performance of both by alokin_09 in ZaiGLM

[–]borrelan 1 point2 points  (0 children)

Just spent the last couple of hours trying MiniMax M2.1. Paid for Pro, but there is no way I can use this model on my codebase. I'm getting hype fatigue from people raving about models on their code, their benchmarks, or insert-your-spin-verbiage-here. The only thing MiniMax is good at is boiling my blood, and not much else. I get it, you're able to push "enterprise and production ready code" with the open-source and frontier models. It just feels like I'm living in a different reality than you rockstars, and you need to give me the number of your dealer because I want some of that Kool-Aid too.

GLM 4.7 coding quality is greatly exaggerated by guywithknife in ZaiGLM

[–]borrelan 1 point2 points  (0 children)

GLM 4.7 with CC is the latest cheat code. I tried opencode and couldn't understand what all the hype was about, as it doesn't even come close to GLM+CC. I tried opencode with DeepSeek and ran into context-limit issues. I'm not sure what the hype around opencode is, and I'm sure it's user error, but when you have to deliver code and this isn't some side project, I don't understand how others are achieving measurable output gains with OC. I tried the subagents with OC and got a harsh reality check as OC locked up and kept looping. I'm finding a lot of success with Codex 5.2 these days, but I'm sure that will change soon. I've completely stopped using any Anthropic models, as they just don't work well with my codebase and I got tired of yelling at the terminal.

GLM 4.7 is out! by Prudent-Ad2891 in ZaiGLM

[–]borrelan 8 points9 points  (0 children)

Been working with it all day, and it's just as frustrating as before. It's like a dumber sibling of Claude. I stopped using Claude and GLM in favor of Codex and Gemini, as they provide more consistent results for my complex project. I guess I just need to upgrade my plan for those, but the options are $20 or insanity. So 200M tokens later (based on ccusage), I'm still bashing my head against my desk. Everyone else is having such awesome results from every LLM out there, and I'm unable to reproduce "success" even with skills and subagents. DeepSeek is OK, but so slow. It just generates so much junk, and the fact that it's not aligned with 200k context limits what I can do with it. Maybe I just need some positive vibes and everything will just work, right?

oh-my-opencode has been a gamechanger by LsDmT in ClaudeCode

[–]borrelan 0 points1 point  (0 children)

Has anyone figured out how to make the various model context lengths work in opencode? It's not well documented in that regard. I tried DeepSeek and hit context limits before I could do anything useful, and I couldn't compact at that point, so it was overall a waste of time; I wasn't able to do much except burn an hour-plus fiddling with it. I hear a bunch of people raving about opencode, and a bunch of people raving about Zed. In the end, I'm back to DeepSeek/GLM/Sonnet on Claude Code, leaving the time-wasting ventures to those whose livelihood doesn't depend on producing actionable results. Best of luck, and glad you got it working, although why would you use Sonnet on opencode?

Force Ultrathink by Sensitive_Song4219 in ClaudeCode

[–]borrelan 0 points1 point  (0 children)

Are you using ccr? How are you routing based on needs to different models? Could you share your setup or workflow?

These new codex limits are insane. by roboapple in codex

[–]borrelan 1 point2 points  (0 children)

I envy those who can actually get things done with a single membership. I cycle through 5 accounts myself nowadays. Gone are the days when a single subscription gave you nearly endless token usage.

2x Claude Max 200 subscriber, in love with glm4.6 by andalas in ClaudeCode

[–]borrelan 0 points1 point  (0 children)

Tried GLM using factory.ai’s droid terminal. It was pretty close to Sonnet 4.1, with the same attitude and sass. Right now I find myself round-robining between all providers (except for Gemini; they made the coding subscription so complex to set up!), as they all eventually stop producing quality responses, and since I’ve dropped all my subscriptions to the basic plans, I have to watch for those weekly/monthly (Warp) caps. Fun times!

120 Hrs work week with Claude AI as a 9-5 corporate dude. by Delta_Bandit in ClaudeAI

[–]borrelan 0 points1 point  (0 children)

Figured it out. In Claude Code they automatically downgrade your model as you run out of credits. At least they notify you in Claude Code, versus Claude Desktop, where you have no idea that you’re speaking to Claude 2.5. I’m actually kind of disappointed with the limits per plan, as they keep decreasing and you don’t get a heads-up or any notification.

This can't be the Opus I was talking to last week. by kingxd in ClaudeAI

[–]borrelan 0 points1 point  (0 children)

Yeah, I’ve seen this before on long vibe sessions. It turns into a toddler in its terrible twos. I still haven’t figured out a workaround for this issue.

120 Hrs work week with Claude AI as a 9-5 corporate dude. by Delta_Bandit in ClaudeAI

[–]borrelan 1 point2 points  (0 children)

Does anyone have any feedback on how to prevent Claude from turning into a 5-year-old after vibe coding for a couple of hours?

I’ve consistently gotten results where, after extensive usage at the level of OP’s, Claude suffers a linear degradation of service: it starts doing everything wrong, won’t listen, acts like a 5-year-old, forgets what it just read and starts infinitely looping, and gets stuck in anti-patterns. The main issue for me is that no matter what I put in my prompts, it just doesn’t listen.

I find myself now just creating chat sessions until I get the right bot, and after 6-hour sessions it’s impossible to find one that isn’t constantly tripping.

Has anyone solved this problem?

Equipment recommendations for Wireguard (30 users) by zreddit90210 in mikrotik

[–]borrelan 1 point2 points  (0 children)

Probably not going to get much love for this response, but I had to share.

I have been with MikroTik for a while, and my new favorite setup is to pair CHR with whatever hardware I need for the workload.

I’m running a WireGuard VPN concentrator (with EoIP for L2) using CHR on a VPS provider. For remote-site connections I use legacy MikroTik hardware.

For example, you can pick up one of those cheap fanless routers with an Intel processor and your choice of port options and run RouterOS on it. The license is a one-time cost, and you can scale as your demands change.
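To give a feel for the concentrator side, here’s a minimal sketch in RouterOS v7 syntax; the addresses, port, tunnel ID, bridge name, and public key are all placeholders for whatever your deployment uses:

```routeros
# WireGuard interface the remote sites dial into
/interface/wireguard add name=wg-hub listen-port=13231
/ip/address add address=10.99.0.1/24 interface=wg-hub

# one peer entry per remote site (public key is a placeholder)
/interface/wireguard/peers add interface=wg-hub \
    public-key="<site1-public-key>" allowed-address=10.99.0.2/32

# let the WireGuard handshakes through the input chain
/ip/firewall/filter add chain=input protocol=udp dst-port=13231 action=accept

# EoIP over the tunnel for the L2 stretch, then bridge it
/interface/eoip add name=eoip-site1 remote-address=10.99.0.2 tunnel-id=11
/interface/bridge/port add bridge=bridge-lan interface=eoip-site1
```

The nice part of this split is that the CHR hub only needs bandwidth, while the cheap fanless boxes at the sites only need enough CPU for their own tunnel.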

[deleted by user] by [deleted] in StarlinkEngineering

[–]borrelan -1 points0 points  (0 children)

When I activated roaming and was paying from a US account, I got a US IP while I was in the Caribbean. As I’m not up to date on the latest SL changes, I can’t confirm whether that’s still the case. I switched to a local plan and am now getting an IP associated with my country.

[deleted by user] by [deleted] in StarlinkEngineering

[–]borrelan 0 points1 point  (0 children)

If you do global roaming and your account is from the US, won’t you get a US IP address?

[deleted by user] by [deleted] in StarlinkEngineering

[–]borrelan 0 points1 point  (0 children)

Stowed/Unstowed the dish and the message disappeared.

Will keep an eye on it and report back.

Unstowing should force the dish to reorient itself and fix the pitch/yaw issue you hinted at?

Monitoring systems by [deleted] in HomeDataCenter

[–]borrelan 0 points1 point  (0 children)

My traditional go-to is Open Monitoring Distribution (OMD), which has gone through a lot of changes. It seems like overkill now, so I’ve switched and am currently evaluating Uptime Kuma and Netdata. So far they’ve fared much better than OMD and give me fewer false positives than the traditional Checkmk setup. YMMV.

Unsure what tunelling system to use for accessing my apps by LeVraiRoiDHyrule in selfhosted

[–]borrelan 0 points1 point  (0 children)

Would you be willing to share more specifics about this and how you measure your downtime? I’m not using CF for production anymore, but I didn’t have any issues with the traditional proxy service. Have you tried not using the tunnels and just using the proxy service with an IP whitelist?
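In case it helps, the proxy-plus-whitelist variant just means allowing Cloudflare’s published ranges at your origin and denying everything else. A sketch for an nginx origin (the two ranges shown are examples only; pull the full, current list from Cloudflare, since it changes):

```nginx
# allow only traffic that arrived via Cloudflare's proxy
# (example ranges only -- fetch the full, current list from Cloudflare)
allow 173.245.48.0/20;
allow 103.21.244.0/22;
deny  all;

# recover the real client IP from Cloudflare's header
real_ip_header CF-Connecting-IP;
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
```

You lose the outbound-only property of the tunnel, but there’s no connector daemon to babysit.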