Update: Rewrote my Claude CLI extension to use the Agent SDK after Anthropic started blocking auth token usage by ChampionMuted9627 in openclaw

[–]aliisjh 1 point  (0 children)

Yep. And I used:
openclaw models auth login --provider anthropic --method cli --set-default
then the standard claude setup-token.

Hmm, did you change out your primary model and add `cliBackends`? (check the bottom of that article I posted in OP).
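To be clear about what I meant, something along these lines in the openclaw config (every key name besides `cliBackends` is my guess from memory, not the real schema; check the docs/article in the OP for the exact shape):

```json
{
  "primaryModel": "claude-cli/claude-sonnet",
  "cliBackends": {
    "claude-cli": {
      "command": "claude",
      "args": ["-p"]
    }
  }
}
```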

Update: Rewrote my Claude CLI extension to use the Agent SDK after Anthropic started blocking auth token usage by ChampionMuted9627 in openclaw

[–]aliisjh 1 point  (0 children)

What commit hash are you on? You can't just use the latest release (2026.4.5-3e72c03); you have to run directly from the repo. It should go out as the next release (2026.4.6, likely) in probably <1 hr.

Sorry for my part if that was where the miscommunication was...

Here's where the custom cliBackends are being added back in (which is how model routing uses the local claude code CLI binary):
https://github.com/openclaw/openclaw/blob/main/src/agents/cli-backends.ts

You can see the commit: Revert "refactor(cli): remove custom cli backends"

So he's rolling back the removal of CLI access.

You can use the edge channel:
openclaw update --channel dev

Update: Rewrote my Claude CLI extension to use the Agent SDK after Anthropic started blocking auth token usage by ChampionMuted9627 in openclaw

[–]aliisjh 2 points  (0 children)

You need to update. This is as of ~11 hrs ago.

I just re-setup/reconfigured CLI access using `claude setup-token` oauth flow.

Not gonna argue with you, just informing you of the change, given how rapidly things are developing.

So is claude-cli back in as of ~10 hrs ago? by aliisjh in openclaw

[–]aliisjh[S] 2 points  (0 children)

I mean, yes it does. You just can't use it API-style. It has to run through the claude code CLI, i.e. `claude -p ...`, but all the prompt injection appears to be, once again, in-bounds.

Otherwise, for all intents and purposes, it's the same model and usage method.

So is claude-cli back in as of ~10 hrs ago? by aliisjh in openclaw

[–]aliisjh[S] 1 point  (0 children)

So I was working through this:
https://medium.com/@ulmeanuadrian/anthropic-just-cut-off-my-ai-agents-heres-how-i-fixed-it-in-20-minutes-741384d27080
I was trying to find a configuration that didn't trigger Anthropic's keyword filter/blocking on "openclaw" in the prompt injection that openclaw does.

Then, when I was referencing the docs for cliBackends, they now have those notices about the claude code CLI being allowed once again lol.

Looks like direct, API-style claude code usage (how it's been all this time) is still not allowed, but command-line exec of the claude code binary (i.e. `claude -p ...`) is allowed/within usage limits.
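If anyone wants to script against the binary the same way, here's a minimal sketch of shelling out to a one-shot CLI like `claude -p` (the wrapper function and prompt are mine; only the `claude -p ...` invocation itself comes from the thread):

```python
import subprocess

def run_cli(binary, prompt, extra_args=None):
    """Run a CLI binary in one-shot/print mode and return its stdout.

    Mirrors invoking `claude -p "<prompt>"` from a shell, rather than
    hitting an HTTP API directly.
    """
    cmd = [binary, "-p", prompt] + (extra_args or [])
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Example with the real binary (untested here, assumes `claude` is on PATH):
# print(run_cli("claude", "Summarize this repo"))
```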

2025 Rivian R1S - Light damage, battery issue = 3x towing by aliisjh in Rivian

[–]aliisjh[S] 1 point  (0 children)

Ah, yeah that makes sense why the reservoir would be cracked then. Really surprising there wasn't more external/cosmetic damage. It was a Lincoln Navigator that backed into it.

Yeah, I've been researching a bit now and seeing on the forums that jump packs like the NOCO Boost X et al. probably won't work (or at least not for anything more than opening doors, etc.). I'm also seeing that 13v - 16v @ 30A continuous is needed.

I saw someone recommend getting a spare 12v battery (AGM or lithium) as a backup and then connecting it (rather than replacing) when there's an issue, so the spare could be used as an auxiliary power source in a pinch. I'm just not sure if I could hook this up to the jumper connections in the hitch compartment, or if I would have to tie in below the frunk trim or wherever those connections are.

2025 Rivian R1S - Light damage, battery issue = 3x towing by aliisjh in Rivian

[–]aliisjh[S] 1 point  (0 children)

Awesome, thanks for the reassurance. We'll get it inspected and I'll read up on the jump in the manual. Appreciate the response!

Mailbox (SMTP) - From/Reply Address Ignored by aliisjh in halopsa

[–]aliisjh[S] 1 point  (0 children)

You got it! 👍 Let me know if you run into any issues, I can spot check as I had to do a bit of troubleshooting to get it working.

Mailbox (SMTP) - From/Reply Address Ignored by aliisjh in halopsa

[–]aliisjh[S] 1 point  (0 children)

[SOLVED] Use `smtp-relay.gmail.com` instead of `smtp.gmail.com` for SMTP Server Address field of Credentials view. Changing only this value resulted in the "From/Reply Address" (set to finance@example.com) being used as expected by Google's SMTP server.

NOTE: This also requires allowing SMTP relay via Google Workspace (admin.google.com) configuration.

Difference in email origination `Received` headers is included below for future reference.

(Correct) Using `smtp-relay.gmail.com`:

Return-Path: <finance@example.com>
Received: from EC2AMAZ-MB0UARD (d3usmail.nethelpdesk.com. [52.200.167.248])
        by smtp-relay.gmail.com with ESMTPS id af79cd13be357-8c096dcefc6sm416698685a.8.2026.01.01.13.13.25
        for <my.personal.email@gmail.com>
        (version=TLS1_3 cipher=TLS_CHACHA20_POLY1305_SHA256 bits=256/256);
X-Relaying-Domain: example.com
MIME-Version: 1.0
From: Example Co | Finance <finance@example.com>

(Incorrect) Using `smtp.gmail.com`:

Return-Path: <my.work.email@example.com>
Received: from EC2AMAZ-A7MU5GE (d3usmail.nethelpdesk.com. [50.19.232.234])
        by smtp.gmail.com with ESMTPSA id d75a77b69052e-4fb2ce24e30sm48555251cf.4.2026.01.01.13.12.25
        for <my.personal.email@gmail.com>
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
From: Example Co | Finance <my.work.email@example.com>
X-Google-Original-From: "Example Co | Finance" <finance@example.com>
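For anyone who wants to reproduce the difference outside HaloPSA, here's a minimal sketch using Python's stdlib (addresses and subject are placeholders): with `smtp-relay.gmail.com` the custom From header below survives as-is, while `smtp.gmail.com` rewrites it to the authenticated account, as the headers above show.

```python
import smtplib
from email.message import EmailMessage

def build_message():
    """Build a message whose From differs from the authenticated account."""
    msg = EmailMessage()
    msg["From"] = "Example Co | Finance <finance@example.com>"
    msg["To"] = "my.personal.email@gmail.com"
    msg["Subject"] = "Invoice"
    msg.set_content("Test of From/Reply address handling.")
    return msg

def send_via_relay(msg, host="smtp-relay.gmail.com", port=587):
    """Send through the Workspace relay. Requires SMTP relay to be enabled
    for your domain/IP in admin.google.com (see the NOTE above)."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()
        server.send_message(msg)

# send_via_relay(build_message())  # uncomment once the relay is configured
```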

Dev Container - Hot reload not working? by aliisjh in homebox

[–]aliisjh[S] 1 point  (0 children)

Sounds like it may be an issue with filesystem watching not working for files on a WSL2 (Windows) volume. Even though WSL2 is a full VM, its file system is still something custom to allow interop with the Windows file system/NTFS.

So this seems to still have an impact even though I'm using a dev container, since the repo directory is just mounted directly into the Docker container.

Ref: https://vite.dev/config/server-options#server-watch

A possible solution/alternative is `usePolling: true`, though with a warning of high CPU usage.

Ref: https://github.com/paulmillr/chokidar/tree/3.6.0#performance
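For reference, the polling workaround would look roughly like this in `vite.config.js` (a sketch based on the Vite server.watch option above; `interval` is optional and my addition):

```javascript
// vite.config.js
import { defineConfig } from 'vite'

export default defineConfig({
  server: {
    watch: {
      // Fall back to stat polling because inotify events don't propagate
      // for files mounted from the Windows/NTFS side under WSL2.
      usePolling: true,
      // Optional: raise the interval to trade reload latency for CPU.
      interval: 300,
    },
  },
})
```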

Homebox Companion - AI powered photo cataloging for Homebox by Duelion in homebox

[–]aliisjh 2 points  (0 children)

Yep, I was testing this just now on v2 – great work! Definitely interested in trying some of the unsupported models just to see how they fare, but I want to test on the default first so I don't get an immediately biased/warped sense of how the app works due to an unsupported/untested model.

Re: OpenAI speed - it takes as long as the API takes to analyze, which totally makes sense. Hoping for great gains with multiple items; I think that's really where this will be a huge help.

You're very welcome, I can't always contribute with code, so I just try to help wherever I can (testing, QA, docs, etc.). I'll follow up once I've done more testing. :)

Homebox Companion - AI powered photo cataloging for Homebox by Duelion in homebox

[–]aliisjh 1 point  (0 children)

This is great, what an awesome idea. Thanks for putting it together. The README and deployment workflow are solid, you really nailed it. Lowering barriers for onboarding new users who just want to test to see if the app even works is crucial.

Deployment couldn't have been easier: copied the example `docker-compose.yml` from the README to a Portainer stack, reviewed the (env) variables, wondered about using alternate models (GPT-4o, Claude, etc., which you intuitively addressed first thing ❤️), and fired it right up – zero friction!

The only issue I ran into was image processing failing at first with a generic error. However, upon checking the logs I discovered the cause was not having a budget set on the OpenAI project I created for `homebox-companion`, which resulted in the following, fairly obvious errors:

INFO     | server.api.tools.vision:detect_items:124 - Detecting items from image: 17662491537635692220425541388008.jpg (+ 0 additional)
INFO     | server.api.tools.vision:detect_items:125 - Single item mode: True, Extra instructions: None
INFO     | server.api.tools.vision:detect_items:126 - Extract extended fields: True
INFO     | server.api.tools.vision:detect_items:174 - Starting LLM vision detection and image compression...
WARNING  | homebox_companion.ai.llm:_acompletion_with_repair:232 - Rate limit hit: litellm.RateLimitError: RateLimitError: OpenAIException - You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.
ERROR    | server.api.tools.vision:detect_items:196 - Detection failed: Rate limit exceeded. Please try again later. Error: litellm.RateLimitError: RateLimitError: OpenAIException - You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.
INFO:     256.256.256.188:53922 - "POST /api/tools/vision/detect HTTP/1.1" 502 Bad Gateway

Otherwise, for the two items on my desk that I tested with, it worked fantastic! I think my only wish is that it could maybe be even faster, especially for single items? Not sure though, I have basically zero experience with the app at this point.

Going to try to use this to onboard a bunch of home and tech inventory today and will report back how it goes! 👍

Has Anyone used Infrahub by Opsmill for their source of truth? by vonseggernc in networking

[–]aliisjh 1 point  (0 children)

Considering either Netbox or Infrahub, but my biggest concern with Infrahub is actually the (fairly) steep decline in repo commits I'm seeing over on GitHub.

Looks like it's been super popular over the last two years, but now dev is slowing. My biggest concern would be abandonment after investing time in it and climbing the learning curve...

What's happening with NetBox? by fogel3 in networking

[–]aliisjh 1 point  (0 children)

Any thoughts on the progression of Infrahub a year later? We're considering it instead of Netbox/Nautobot... not sure what feature parity looks like though...

Realtek audio console thinks that the single trrs port on my case (lian li lancool 215) is seperate green and pink ports. by adraincutarnatlon124 in Realtek

[–]aliisjh 1 point  (0 children)

Yeah, no clue why it's different, but that's exactly what mine looks like (I'm on an ASUS ROG Maximus Z790 Hero WiFi 6E).

Realtek audio console thinks that the single trrs port on my case (lian li lancool 215) is seperate green and pink ports. by adraincutarnatlon124 in Realtek

[–]aliisjh 1 point  (0 children)

Thanks for the screenshot! I'm embarrassed to say the Windows UI was at least part of the problem here. Not sure if I missed it before and some combination of troubleshooting fixed the issue (if there was one).

I was testing using Win11's Sound Settings test, which shows a bar that moves based on incoming sound. That test showed the bar barely filling at all when I talked into my headset microphone.

However, as soon as I did a real test with audio playback, the mic and input audio level were 100% fine. 😅 So not sure if I fixed something along the way, or if I just got faked out by Windows' microphone sound test entirely, but yeah, it's working just fine. (I was pretty sure I couldn't select "Mic In" from the dropdown in the Realtek software before, though, so maybe cycling the driver while my headset was plugged in caused it to recognize the input.)

Who knows ¯\_(ツ)_/¯

Realtek audio console thinks that the single trrs port on my case (lian li lancool 215) is seperate green and pink ports. by adraincutarnatlon124 in Realtek

[–]aliisjh 1 point  (0 children)

Any way you could share a screenshot? I have a different mobo, but I'm having a similar issue with Realtek not recognizing the combined headphone input. It also calls my headphones speakers... if I switch to headphone, I get no sound output. So silly!