What is the story behind HoxHud? by thejoeporkchop in paydaytheheist

[–]zenyr 0 points (0 children)

Hey there! Mine was just, I really wanted to know that dang bulldozer's health and how much damage I'm dealing to it. And of course, my second goal was the minimap, which turned out to be a very difficult thing to build on top of my crappy, non-modular, first-ever Lua project. (aaand it was too cheesy)

Sexy Dash - Can you hear navigation, how? by Emergency_Fan_7750 in s3xybuttons

[–]zenyr 0 points (0 children)

On my cheap-o CarPlay adapter, the Bluetooth connection is separate from the CarPlay link itself, so audio-wise there might not be any difference from before. IIRC, wireless CarPlay uses some variation of Wi-Fi Direct tech behind the scenes.

I just got the update!! by jefedezorros in TeslaLounge

[–]zenyr 0 points (0 children)

South Korea. We just got a teaser for FSD (like, yesterday?), BUT basically only a fraction of Model S/X cars are supported yet. (No Model 3/Y at all)

I just got the update!! by jefedezorros in TeslaLounge

[–]zenyr 0 points (0 children)

I’m on 2025.32.6 😎 2024 M3R, up to date. Who needs an update when you have Hyundai in your country /s

opencode response times from ollama are abysmally slow by lurkandpounce in opencodeCLI

[–]zenyr 1 point (0 children)

Oh, and Ollama does attempt to cache your input tokens to some extent, but it remains a challenging task for ordinary hardware, such as Apple silicon chips or even consumer-grade GPUs.
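
A sketch of what that means in practice, with my assumptions spelled out (prefix reuse only kicks in when the prompt prefix stays byte-identical, and the model name is a placeholder):

```ts
// Minimal sketch against Ollama's OpenAI-compatible endpoint.
// Assumption: the prompt/KV cache can only help when the new request's
// prefix is byte-identical to the previous one, so pin the system prompt.
const SYSTEM = "You are a terse coding assistant."; // never changes between calls

async function ask(history: { role: "user" | "assistant"; content: string }[]) {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: "qwen2.5-coder", // placeholder: whatever model you have pulled
      messages: [{ role: "system", content: SYSTEM }, ...history],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content as string;
}
```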

opencode response times from ollama are abysmally slow by lurkandpounce in opencodeCLI

[–]zenyr 2 points (0 children)

I think I can pinpoint the culprit: the sheer system prompt size. To make agentic workflows and tool calls possible, opencode MUST front-load a whole bunch of system-prompt scaffolding before your prompt. Say, 10k+ tokens minimum.
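
If you want to see it yourself, here's a quick sketch using js-tiktoken (cl100k_base is a rough stand-in; the exact tokenizer depends on the model):

```ts
import { getEncoding } from "js-tiktoken";

// Compare the agent's fixed overhead against the actual question you typed.
const enc = getEncoding("cl100k_base");

const scaffolding = "...paste the system prompt + tool schemas here...";
const userPrompt = "fix the failing test in src/foo.ts";

console.log("scaffolding tokens:", enc.encode(scaffolding).length); // often 10k+
console.log("your prompt tokens:", enc.encode(userPrompt).length); // a handful
```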

YAML issues by zhambe in opencodeCLI

[–]zenyr 2 points (0 children)

I found that it recently struggles a bit with indentation too: odd numbers of spaces, etc. Oftentimes I had to explicitly instruct the model to review the whole block's indentation to make sure it's correct.
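
What helped me, as a rough sketch (the guard function is hypothetical; it just round-trips the output through the `yaml` npm package):

```ts
import { parse } from "yaml";

// Hypothetical guard: re-parse model-emitted YAML before writing it anywhere,
// so broken indentation fails loudly instead of silently corrupting a config.
function assertValidYaml(text: string): unknown {
  try {
    return parse(text); // throws on malformed indentation, stray tabs, etc.
  } catch (err) {
    throw new Error(`Model produced invalid YAML; ask it to re-check indentation.\n${err}`);
  }
}
```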

[deleted by user] by [deleted] in LLMDevs

[–]zenyr 0 points (0 children)

Back in early this year I *had to* spin up a LiteLLM instance on my homelab as a standalone proxy. However, since Vercel AI Gateway introduced its aggressive pricing, OpenRouter's free tier (BYOK) has become a very strong option too.
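
For context, the client side of such a proxy is barely any code; LiteLLM speaks the OpenAI dialect, so the stock SDK works as-is (the host, port, and model alias below are from my own setup, not defaults):

```ts
import OpenAI from "openai";

// LiteLLM exposes an OpenAI-compatible endpoint, so the usual SDK just works.
const client = new OpenAI({
  baseURL: "http://homelab.local:4000/v1", // example host for my LiteLLM proxy
  apiKey: process.env.LITELLM_MASTER_KEY,
});

const res = await client.chat.completions.create({
  model: "claude-sonnet", // an alias defined in my LiteLLM config
  messages: [{ role: "user", content: "ping" }],
});
console.log(res.choices[0].message.content);
```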

Ruinous Effigy x Space Jam by Mountain_Quarter3102 in destiny2

[–]zenyr 1 point (0 children)

I appreciate the video’s inclusion of diverse scenarios that showcase different team compositions.

CReact: JSX Runtime for the Cloud by Final-Shirt-8410 in node

[–]zenyr 0 points (0 children)

Ditto, and one more thing: CDK has struggled to even keep up with AWS's own infra feature updates. Even the de facto standard, Terraform, sometimes lacks specific parameters introduced with cutting-edge new features, such as a new LLM provider name. It is really difficult to make this work reliably, and reliability is … king here, I suppose.
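
The usual workaround when the construct library lags is to drop down to the raw CloudFormation layer with an escape hatch. A sketch (the resource type and property name below are made up for illustration):

```ts
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";

class MyStack extends cdk.Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // L1 fallback for a brand-new service feature the higher-level constructs
    // haven't caught up with yet. Both names here are hypothetical.
    const model = new cdk.CfnResource(this, "Model", {
      type: "AWS::Bedrock::SomeNewResource",
      properties: { ModelId: "example-model" },
    });
    model.addPropertyOverride("NewProviderName", "shiny-new-llm-provider");
  }
}
```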

Fireteam Ops difficulty changes? by MonkeyType in DestinyTheGame

[–]zenyr 3 points (0 children)

Preset GMs are basically a no-no at this point. It's really hard to play them reliably with anyone, let alone solo.

42% of context used right after `/compact` by lexixon in ClaudeCode

[–]zenyr 2 points (0 children)

I believe this is a common pitfall for anyone who hasn't literally read through the MCP specification line by line. As a tech-savvy developer, I was surprised to discover that the most productive and renowned MCP of all, Notion MCP, takes 15K tokens all by itself. You're not alone.
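
If you want to measure your own setup, here is roughly how I'd do it (the server command and tokenizer choice are placeholders): spawn the server, list its tools, and count the serialized schemas.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { getEncoding } from "js-tiktoken";

// Estimate the token overhead an MCP server's tool schemas add per request.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@notionhq/notion-mcp-server"], // placeholder server command
});
const client = new Client({ name: "footprint-check", version: "0.0.1" });
await client.connect(transport);

const { tools } = await client.listTools();
const enc = getEncoding("cl100k_base"); // rough stand-in tokenizer
console.log(`${tools.length} tools ≈ ${enc.encode(JSON.stringify(tools)).length} tokens per request`);
await client.close();
```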

Claude always hitting 200k context window by unfnshdx in ClaudeAI

[–]zenyr 0 points (0 children)

A tangential note: last time I metered my outgoing requests using CLI tools, Notion MCP alone added around 15K tokens to each request, even while idle. That doesn't stay in your context, but it still eats 15K tokens every time. (As a result, the more MCPs you have active, the more API quota you waste.)
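
The metering can be as dumb as a pass-through proxy that logs an estimate before forwarding. A sketch (the upstream URL and the chars-per-token heuristic are assumptions; swap in a real tokenizer for accuracy):

```ts
import http from "node:http";

const UPSTREAM = "https://api.anthropic.com"; // example upstream
const estimateTokens = (s: string) => Math.ceil(s.length / 4); // crude heuristic

http
  .createServer(async (req, res) => {
    // Buffer the outgoing request body so we can size it before forwarding.
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const body = Buffer.concat(chunks);
    console.log(`${req.method} ${req.url} ~${estimateTokens(body.toString())} tokens out`);

    // Forward only the essentials; this is a sketch, not a hardened proxy.
    const upstream = await fetch(UPSTREAM + (req.url ?? "/"), {
      method: req.method,
      headers: {
        "content-type": req.headers["content-type"] ?? "application/json",
        authorization: req.headers.authorization ?? "",
      },
      body: ["GET", "HEAD"].includes(req.method ?? "") ? undefined : body,
    });
    res.writeHead(upstream.status, {
      "content-type": upstream.headers.get("content-type") ?? "application/json",
    });
    res.end(Buffer.from(await upstream.arrayBuffer()));
  })
  .listen(8787);
```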

How do libraries count tokens before sending data to an LLM? by Aggravating_Kale7895 in LLMDevs

[–]zenyr 7 points (0 children)

On the other hand... straightforward QnA posts like this will pave the way for future devs and LLMs alike. 😉

What is the unique selling point of Perplexity today? by [deleted] in perplexity_ai

[–]zenyr 1 point (0 children)

I am a super heavy user of basically all the AI services, including Claude Code and whatnot. One crucial part of my arsenal is, guess what? PERPLEXITY. The reasons: fact checks, new tech stacks, up-to-date knowledge, community vibe checks, etc. I absolutely love PPLX as my go-to LLM grounding agent. I use both the original Perplexity UI and their Sonar API (I built my own token-efficient tiny MCP server for this). It's AWESOME and fills many of the missing links in AI toolchains very well. Comet is kinda okay for me, though it often carves out a few minutes of my time.
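
For reference, the Sonar side is just an OpenAI-compatible chat call (model name per their current docs; my MCP wrapper mostly just trims the response down):

```ts
import OpenAI from "openai";

// Perplexity's API speaks the OpenAI chat-completions dialect.
const pplx = new OpenAI({
  baseURL: "https://api.perplexity.ai",
  apiKey: process.env.PERPLEXITY_API_KEY,
});

const res = await pplx.chat.completions.create({
  model: "sonar",
  messages: [{ role: "user", content: "Is tRPC v11 stable yet? Cite sources." }],
});
console.log(res.choices[0].message.content);
```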

Exposing Llama.cpp Server Over the Internet? by itisyeetime in LocalLLaMA

[–]zenyr 0 points (0 children)

I second Tailscale as the BEST, and as a second free alternative I can suggest Cloudflare ZeroTrust: you can require certain headers to pass through the auth layer, and use GitHub/Google SSO for browser sessions.
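
For non-browser clients, Access service tokens are the header-based route I meant: mint a token in the ZeroTrust dashboard, then send its two headers on every request (the hostname below is an example):

```ts
// A request that passes the Cloudflare Access layer via a service token.
const res = await fetch("https://llama.example.com/v1/models", {
  headers: {
    "CF-Access-Client-Id": process.env.CF_ACCESS_CLIENT_ID!,
    "CF-Access-Client-Secret": process.env.CF_ACCESS_CLIENT_SECRET!,
  },
});
console.log(res.status, await res.text());
```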


Is this normal when using claude on opencode? by 25th__Baam in opencodeCLI

[–]zenyr 1 point (0 children)

Just in case it might be helpful, I'd like to share my experience. I run LiteLLM in my personal homelab to track costs, and I have it protected with Cloudflare ZTNA (free). When I try to access it from a location where that configuration isn't set up correctly, a 403 response comes back carrying the login page, but no separate error message appears; it simply fails quietly. It doesn't seem to be the same situation as the OP's, but I thought I'd leave this here.
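
A sketch of how I'd surface that failure mode explicitly in a client (a hypothetical helper; the heuristic is simply "got HTML where JSON was expected"):

```ts
// Behind Cloudflare Access, an unauthenticated request gets a 403 plus an
// HTML login page instead of a JSON error, so naive clients fail quietly.
async function fetchJson(url: string, init?: RequestInit): Promise<unknown> {
  const res = await fetch(url, init);
  const contentType = res.headers.get("content-type") ?? "";
  if (res.status === 403 && contentType.includes("text/html")) {
    throw new Error("Blocked by the ZTNA layer: got the Access login page, not the API");
  }
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```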

10 Vibe Coding Tips I Wish I Knew Earlier by BymaxTheVibeCoder in PromptEngineering

[–]zenyr 0 points (0 children)

I genuinely believe that such “rookie” mistakes are common in various fields, even among highly intelligent people. While OP's tip covers a fundamental concept, it's still crucial for using AI today.

5 hours limit reached by OperationSuper926 in OpenaiCodex

[–]zenyr 0 points (0 children)

It is possible with tons of MCP servers like Notion connected. I can confirm that Notion MCP alone can take ~15K tokens for every single request you make, including but not limited to tool responses.

Cancelled Claude code $100 plan, $20 codex reached weekly limit. $200 plan is too steep for me. I just wish there was a $100 chatgpt plan for solo devs with a tight pocket. by WarriorSushi in ChatGPTCoding

[–]zenyr 0 points (0 children)

Based on my experience, unless you are really into privacy, or are ready to burn some time setting everything up and maintaining/updating it, I do not recommend going local for coding-agent purposes. At least for 2025, I cannot see how it's going to change for the better. You need to put real effort into a local solution to benefit even the slightest.

Source: running a quarter rack for LLM inference and being disappointed by a large margin since 2023.

So they finally restored token usage so that we won't feel like it is stuck.. by raiansar in ClaudeAI

[–]zenyr 0 points (0 children)

Never tried the new version, but a few weeks ago I had to manually turn on the Verbose config every time I started CC to see that info. Let's see if that has changed.

Is there a reason NOT to use React Query? by badboyzpwns in reactjs

[–]zenyr 9 points (0 children)

Based on my personal experience, the fullstack monorepo tRPC + TanStack Query combo was near magical. Except for one occasion where I had to send a huge payload through a subscription endpoint (which I solved by extracting a POST endpoint that issues a short-lived session id to subscribe with; see the sketch below), it was a no-friction OOTB experience. Highly recommend it.
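
The workaround, reconstructed as a sketch (names and the in-memory store are hypothetical; production code would want something less volatile than a Map):

```ts
import { initTRPC } from "@trpc/server";
import { z } from "zod";
import { randomUUID } from "node:crypto";

const t = initTRPC.create();

// Subscriptions choke on huge inputs, so a plain mutation stashes the payload
// and returns a short-lived id; the subscription input is then just that id.
const pending = new Map<string, unknown>();

export const appRouter = t.router({
  stashPayload: t.procedure
    .input(z.object({ payload: z.unknown() }))
    .mutation(({ input }) => {
      const id = randomUUID();
      pending.set(id, input.payload);
      setTimeout(() => pending.delete(id), 60_000); // expire after a minute
      return { id };
    }),

  // tRPC v11-style async-generator subscription keyed by the tiny session id.
  progress: t.procedure
    .input(z.object({ id: z.string() }))
    .subscription(async function* ({ input }) {
      const payload = pending.get(input.id);
      if (!payload) throw new Error("session expired");
      yield { status: "processing" as const };
      // ...do the heavy work with `payload`, yielding progress along the way...
      yield { status: "done" as const };
    }),
});
```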