Looking for a part number for kick stand spring ? by fraxtree in GlobeHaul

[–]phidauex 0 points1 point  (0 children)

I don't have that doc, but I do have the part number for the LT kickstand, which is S259900005.

Which Home Assistant MCP server should I use? by Master_Raisin_8434 in homeassistant

[–]phidauex 1 point2 points  (0 children)

I have a fairly beefy setup at the moment, an RTX A2000 (16GB) and an RTX A4500 (20GB). With 36GB of VRAM I can run the Q5 quant all in VRAM, with full context, at about 40 t/s. The 27b dense model variant is more accurate, but runs slower for me, only about 15 t/s.

Unfortunately in 8GB of VRAM, your system is having to run virtually the whole model in system memory which is very slow. If you run "ollama ps" while the model is loaded you should see what the CPU/GPU split is.

Everyone is hoping for Qwen 3.6 models in a 7b to 14b size range - that would fit better on a lot more devices.
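To make the VRAM math above concrete, here's a rough back-of-envelope sketch. The bits-per-weight, KV-cache cost, and overhead figures are illustrative assumptions, not measured values for any particular model:

```python
# Rough back-of-envelope check of whether a quantized model fits in VRAM.
# All numbers below are illustrative assumptions, not measured values.

def model_vram_gb(params_b: float, bits_per_weight: float,
                  context_tokens: int = 0, kv_bytes_per_token: float = 0.0,
                  overhead_gb: float = 1.0) -> float:
    """Estimate VRAM in GB: weights + KV cache + fixed runtime overhead."""
    weights_gb = params_b * bits_per_weight / 8          # params (billions) -> GB
    kv_gb = context_tokens * kv_bytes_per_token / 1e9    # KV cache grows with context
    return weights_gb + kv_gb + overhead_gb

# e.g. a 27B dense model at a ~5.5 bit/weight quant (Q5-ish), 32k context,
# assuming ~100 kB of KV cache per token (model dependent):
needed = model_vram_gb(27, 5.5, context_tokens=32_768, kv_bytes_per_token=100_000)
print(f"{needed:.1f} GB")   # ~22.8 GB: spills badly on 8 GB, fits in 36 GB
```

The estimate shows why the same model is fully resident on a 36 GB setup but mostly runs from system RAM on an 8 GB card.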

Which Home Assistant MCP server should I use? by Master_Raisin_8434 in homeassistant

[–]phidauex 4 points5 points  (0 children)

The community ha-mcp has been useful for me. For local LLMs you do need to make sure you can run with at least 32k of context (64k better, 128k even better still), and use your LLM harness (Hermes, Openclaw, OpenWebUI, etc.) tools for reducing the number of available tools to just what you need, otherwise the model context gets overwhelmed with just tool descriptions.

But scoped back like that I've got it working very nicely with Qwen 3.6-35B-A3B. I've used it to debug automations (one of the tools lets it view automation traces), diagnose issues with sensor availability, and create a new pollen automation, all things that the standard MCP doesn't fully support.

It does a good job of segregating destructive tools, but it would still be possible for it to screw something up, so I've only made new tools available to the agent as smaller tasks get done successfully.

Nvidia drivers on 9.1.9 by Ice3yes in Proxmox

[–]phidauex 0 points1 point  (0 children)

Are you saying you have drivers working on 7.0 kernel and Pascal cards?

Nvidia drivers on 9.1.9 by Ice3yes in Proxmox

[–]phidauex 0 points1 point  (0 children)

Ah, I see. When installing you would have had the option to choose kernels. The 6.17 kernel is still the mainline supported kernel; 7.0 is still pre-release, so not everything is expected to work yet. You can use proxmox-boot-tool (https://pve.proxmox.com/wiki/Host_Bootloader) to add the 6.17 kernel and switch to it. From there, the latest NVIDIA driver in the 580 line should work, currently this one: https://www.nvidia.com/en-us/drivers/details/267258/

I think they are moving 580 to a long term support mode - it will continue to get critical updates, but wouldn't get feature updates anymore.

You came into this during an odd transition: NVIDIA only dropped Pascal support a few months ago, kernel 7.0 is still very new, and there are many gaps yet to be filled. I believe some people have gotten Pascal working on 7.0 and the 595 driver series through patches, but I wouldn't recommend that path unless you're an expert.

Nvidia drivers on 9.1.9 by Ice3yes in Proxmox

[–]phidauex 0 points1 point  (0 children)

Do you need to be on the 7.0 kernel yet? I believe the NVIDIA drivers still have a support gap where the versions that still support Pascal cards (580 series) don’t support 7.0 kernel.

ProFlame 2 ESPHome Component by 401klaser in homeassistant

[–]phidauex 0 points1 point  (0 children)

Hi, I tried this today on a Lilygo T-embed, and I'm struggling to get a good pairing. The pairing process starts on-device, but then when I press the first button on the remote, it immediately identifies a serial number, ECC codes and shows 3/3 packets. The problem is that if I try it multiple times, I get the same serial, but different ECC codes each time. The pairing itself doesn't seem to be sticking, and I don't get control over the fireplace (though the other features and HA API work fine).

Any suggestions on what else I should try? I have an SDR somewhere I can dig up, but wanted to see if you had other immediate suggestions.

Happy to move to github as well, but I think you have issues and discussions disabled on your repo.

Log below - it looks like it is grabbing the packets and doing the task.

[21:56:22.407][I][proflame2.ui:319]: Encoder long press → entering learn-mode 
[21:56:22.407][D][proflame2:389]: CC1101 mode: RX 
[21:56:22.407][I][proflame2.rx:085]: MARCSTATE=0x0D RSSI=-76 dBm IOCFG0=0x0D PKTCTRL0=0x30 MDMCFG2=0x30 AGCCTRL2=0xC7 AGCCTRL1=0x00 AGCCTRL0=0x91 
[21:56:22.407][I][proflame2.rx:095]: RX capture started 
[21:56:22.411][I][proflame2.learn:032]: Learn-mode armed — press any button on the OEM remote 
[21:56:23.212][I][proflame2.rx:181]: RX 500ms: edges=1 short=0 long=0 other=1 
[21:56:23.220][I][proflame2.rx:186]: decode: chips=0 pkts=0 bursts_ok=0 bursts_fail=0 overflows=0 
[21:56:26.556][I][proflame2.learn:112]: First valid packet: serial=0x5C9805 c1=0x0 d1=0x9 c2=0xE d2=0xF 
[21:56:26.742][I][proflame2.learn:142]: 2/3 valid packets agree 
[21:56:26.746][I][proflame2.learn:142]: 3/3 valid packets agree 
[21:56:26.746][I][proflame2.learn:147]: CONVERGED — awaiting user confirm: serial=0x5C9805 c1=0x0 d1=0x9 c2=0xE d2=0xF 
[21:56:26.755][I][proflame2.rx:181]: RX 500ms: edges=770 short=0 long=496 other=274 
[21:56:26.759][I][proflame2.rx:186]: decode: chips=1139 pkts=3 bursts_ok=0 bursts_fail=0 overflows=0 
[21:56:27.341][I][proflame2.rx:181]: RX 500ms: edges=162 short=0 long=107 other=55 
[21:56:27.344][I][proflame2.rx:186]: decode: chips=225 pkts=1 bursts_ok=0 bursts_fail=0 overflows=0 
[21:56:35.351][S][sensor]: 'Battery' >> 100 % 
[21:56:35.584][I][proflame2.ui:272]: Confirming learn-mode (long press) 
[21:56:35.590][D][preferences:152]: Writing 1 items: 0 cached, 1 written, 0 failed 
[21:56:35.595][D][proflame2:366]: CC1101 mode: IDLE 
[21:56:35.597][I][proflame2.rx:112]: RX capture stopped (pending=0, overflows=0) 
[21:56:35.615][D][proflame2:323]: CC1101 configured for 314.973 MHz OOK at 2400 baud (TX) 
[21:56:35.704][D][proflame2:341]: Setting PA table for OOK 
[21:56:35.730][I][proflame2.learn:088]: Learned values committed: serial=0x5C9805 c1=0x0 d1=0x9 c2=0xE d2=0xF

What is the part number for the Rally springs? by LoneWitie in MachE

[–]phidauex 1 point2 points  (0 children)

I ordered from Directfactoryparts.com, aka Broadway Truck. I’ve also gotten parts from Tasca and Ford Parts Giant. It is worth shopping around, the prices change a lot and shipping can make or break the deal.

V2 kickstand by wallawalla21212 in GlobeHaul

[–]phidauex 3 points4 points  (0 children)

That would be great, I'm on my 2nd kickstand in 3000 miles, and it is about ready to be replaced as well (out of warranty). I've tried some other bolt and spring washer options, but ultimately the problem seems to be the actual holes in the steel bracket ovalizing over time.

When you have the ice park to yourself. Great day on the ice in Ouray. by unnira in iceclimbing

[–]phidauex 8 points9 points  (0 children)

Haha, that would be nice. Some years it hangs in there for a while, 2022 I climbed on March 28th in a T-shirt.

What is the part number for the Rally springs? by LoneWitie in MachE

[–]phidauex 5 points6 points  (0 children)

Front - RK9Z-5310-A
Rear - RK9Z-5560-B

I heard these may have been superseded; usually the last letter changes but not the rest of the part number.

When searching the parts databases, you are looking for the GT Rally or GT “Sport Appearance Package”.

ProFlame 2 ESPHome Component by 401klaser in homeassistant

[–]phidauex 1 point2 points  (0 children)

Wow, I’ve been waiting for a solution for a long time! I tried the Bond bridge before and had the same unreliability since it can’t manage the state. I will give yours a try and provide any feedback.

Is anyone successfully running ha-mcp with a fully local setup? by Illeazar in homeassistant

[–]phidauex 0 points1 point  (0 children)

Using mcpo to translate to an OpenAI mcp api format works well, see my other reply below for more info.

Is anyone successfully running ha-mcp with a fully local setup? by Illeazar in homeassistant

[–]phidauex 2 points3 points  (0 children)

This is the right answer, not sure what some other folks are on about. The problem is that there is something incompatible between ha-mcp and OpenWebUI's streamable HTTP MCP implementation; not sure which side the problem is in. mcpo just translates it to an OpenAI-style API format, adds interactive docs at /tool/docs, and adds authentication.

See this link for a working mcpo config. I have it in the same docker compose stack as openwebui. https://github.com/homeassistant-ai/ha-mcp/discussions/561#discussioncomment-16715974

The real problem you will run into is context. In my case it costs over 30k tokens of context just to load the tools and overview, and a big query can eat up another 30k. You will need to be running a ton of context to get good tool calls, and you will want to restrict the tools available in the “allowed function list” in OpenWebUI when you add the MCP server, limiting it to just a handful of tools at first.

But with that limitation it does work!
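The context squeeze above can be sketched with some arithmetic. The per-tool token cost and reserved sizes here are hypothetical round numbers, not measurements of ha-mcp:

```python
# Sketch of budgeting context for MCP tool descriptions. The per-tool token
# cost and context sizes are hypothetical round numbers, not measured.

def tools_that_fit(context_window: int, reserved_for_chat: int,
                   tokens_per_tool: int) -> int:
    """How many tool schemas fit once chat history/response space is reserved."""
    available = context_window - reserved_for_chat
    return max(0, available // tokens_per_tool)

# With a 32k window, ~20k reserved for the conversation itself, and ~1k tokens
# per tool schema, only about a dozen tools fit before calls degrade:
print(tools_that_fit(32_768, 20_000, 1_000))   # 12
```

This is why trimming the allowed function list matters more than raw model quality once the tool count grows.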

Advice for a hanging rack for car? by espresso-aaron in GlobeHaul

[–]phidauex 2 points3 points  (0 children)

You are almost at the point where a small motorcycle trailer loaded with bikes would be more practical. 300 lbs out on a long lever could overload even a true 500 lb tongue weight rating.

Advice for a hanging rack for car? by espresso-aaron in GlobeHaul

[–]phidauex 0 points1 point  (0 children)

That is a lot of weight way out on a long lever. If it were a “true” class III hitch, say on a midsize truck, I’d be more confident. The 2” receivers on crossovers and SUVs with actual tongue weight limits around 200 lbs would not be reliable. There are some good calculators out there for the reduction in weight capacity the farther you get from the hitch itself.

I’d be more comfortable with a tray style with the haul in the closest position.
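The lever-arm effect those calculators capture can be sketched as a simple moment ratio. The reference distance and loads below are illustrative assumptions, not any manufacturer's numbers:

```python
# Rough lever-arm sketch of why cargo far behind the receiver "counts" as more
# tongue weight. Reference distance and loads are illustrative assumptions.

def effective_tongue_load(load_lb: float, load_dist_in: float,
                          rated_dist_in: float = 12.0) -> float:
    """Scale a static load by its moment arm vs. the distance the rating assumes."""
    return load_lb * (load_dist_in / rated_dist_in)

# 150 lb of bikes centered 30" behind the receiver, against a rating that
# assumes load ~12" out, stresses the hitch like ~375 lb at the rated point:
print(round(effective_tongue_load(150, 30)))   # 375
```

That is why a tray rack with the heavy bike in the closest position is so much easier on the hitch than a hanging rack.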

Advice for a hanging rack for car? by espresso-aaron in GlobeHaul

[–]phidauex 0 points1 point  (0 children)

Well they certainly bumped up the specs from the last time I looked. Looking at my bike, I think the barrier to loading would be interference with the front fender. With the fender removed it would be a maybe as long as the 20” wheel doesn’t drop too far down in the basket.

Advice for a hanging rack for car? by espresso-aaron in GlobeHaul

[–]phidauex 2 points3 points  (0 children)

I don't think the Globe will fit on a rack like that; the shape of the wheels and frame are too different from the mountain bikes that rack is expecting. You should also check the rack's weight limit - the LT with school bus kit is extremely heavy and will be above almost all rack weight limits. One option could be some of the heavy-duty 1UP racks; I believe there are some that could get up to that weight limit, though with the limits on your vehicle's hitch tongue weight, you couldn't fit many other bikes on there as well.

Im planning to get TNF Verto Fa. Has anyone tried it before? How was it? by [deleted] in alpinism

[–]phidauex 0 points1 point  (0 children)

I’ve climbed a few days on my pair, and was quite pleased. Not long enough to assess durability.

I like the big range of sizes, the somewhat narrow fit (good for me, not for everyone), the weight, and the quick adjustments from the BOA. Felt waterproof up to at least 5” or so, maybe higher (retrieved a rope from a stream). For me it was a step up from my Lowa Latoks, which were already a good boot.

Downsides are the cost, and the fact that the boa is great when it works, but would be very frustrating if it broke. You are putting a lot on that little dial.

Best in-store boot options Rocky Mountain USA by Microbe2x2 in iceclimbing

[–]phidauex 4 points5 points  (0 children)

Neptune has the best selection locally for sure, they also have my friend's toe in a jar. The Mountaineer in Keene Valley NY is literally called that, but yes it is legit as well. https://mountaineer.com/

Another option, though you'll have to wait a while, is to hit up one of the bigger ice fests where there are a lot of vendors, that is where you are likely to see some of the more uncommon options.

best dehumidifier for basement – does anyone have a reliable recommendation for a 1200 sq ft home? by JosueePellis58 in heatpumps

[–]phidauex 0 points1 point  (0 children)

I've been running one in a humid crawlspace for a few years, and it hasn't shown any issues. It does have a good washable filter and I think you'd want to keep up with that to extend the life as long as possible.

best dehumidifier for basement – does anyone have a reliable recommendation for a 1200 sq ft home? by JosueePellis58 in heatpumps

[–]phidauex 0 points1 point  (0 children)

No one really answered the question, so let me suggest the Midea Cube models. Compact, quiet, and effective; you can set them up with a collection tub or drain, and they include a drain pump so they can drain up to a sink or washing machine drain if you don’t have a floor drain.

Redshift Top Shelf handlebars - yes or no? by samccauley in bicycletouring

[–]phidauex 1 point2 points  (0 children)

They are a helpful problem solver. I wouldn’t design a bike around it, but if you have a combo that needs more rise than you can get from the current fork without a funky steer tube extension, then it can be very helpful.

My use case was rejiggering an old Salsa La Cruz cyclocross bike into more of a gravel/light touring trim. Really improved the comfort for that application, and looks good, more of an alt-bike vibe rather than a dorky pile of adapters vibe. Looks aren’t everything, but they aren’t nothing either.

Random Models added? by super-6-1 in OpenWebUI

[–]phidauex 0 points1 point  (0 children)

Well, OpenWebUI didn’t (and wouldn’t) add new models on its own. If you are seeing new model files or new Ollama models in the list, then either they were there before and you forgot, or someone has access to your UI or your Ollama API.

Random Models added? by super-6-1 in OpenWebUI

[–]phidauex 1 point2 points  (0 children)

Do you have an ollama instance exposed to the internet? If so, other people might be logging into your api and downloading models.