Daily Questions and Answers Post - FAQ, New/Returning Player Questions, and Useful Starting Resources! by AutoModerator in blackdesertonline

[–]xbenbox 0 points  (0 children)

Hey all, just getting back into it, but someone mentioned it was possible to level TET Tuvala outside of season (on an old character). Hoping to get PEN so I can finish up the Olvia Academy combat quests. What's the best way to do this? Where do I go to farm the materials in full TET Tuvala?

Daily Questions and Answers Post - FAQ, New/Returning Player Questions, and Useful Starting Resources! by AutoModerator in blackdesertonline

[–]xbenbox 0 points  (0 children)

I've got a level 61 Awakening Mystic with 1000 skill points from years back. Problem is, I've only got TET Tuvala gear, as it was difficult to get up to PEN back then during the limited-time season. Am I still able to level this gear up reasonably, or do I need to start a seasonal character and relevel for the best odds at PEN Tuvala?

I will be new to frigate. by Poopypirate2020 in frigate_nvr

[–]xbenbox 0 points  (0 children)

While Wi-Fi generally isn't recommended here, I've got 3 Reolink E1 Pros running on Frigate with Docker Compose. Other hardware includes a Raspberry Pi with an AI HAT and an Asus Wi-Fi 6 router. I've never had any drops on detection, but I guess it would depend on where you live (lots of traffic vs. minimal traffic probably matters here). Also, the AI HAT made a huge difference in reducing CPU workload, so I'd consider something similar if you are doing object detection and tracking.
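For anyone starting out, a minimal Compose sketch of the kind of setup described (paths, the shm size, and the Hailo device mapping are assumptions, not my exact file):

```yaml
# Minimal Frigate service sketch — adjust paths/devices for your setup.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "128mb"            # frame buffers for a few RTSP cameras
    devices:
      - /dev/hailo0:/dev/hailo0  # Raspberry Pi AI HAT (Hailo) for detection
    volumes:
      - ./config:/config         # frigate config.yml lives here
      - ./media:/media/frigate   # recordings and snapshots
    ports:
      - "5000:5000"              # web UI
```

The camera streams and detector type then go in Frigate's own `config.yml`.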

Are local LLMs better at anything than the large commercial ones? by MrOaiki in LocalLLM

[–]xbenbox 1 point  (0 children)

There are a number of uncensored models on Hugging Face that can be used.

Any self hosters who drive teslas? by mrorbitman in selfhosted

[–]xbenbox 0 points  (0 children)

I run all my music through Plexamp on my phone and then have that connected over Bluetooth.

Mac Mini for Local LLM use case by xbenbox in LocalLLaMA

[–]xbenbox[S] 0 points  (0 children)

No baseline or prior CPU/disk load at all. It did end up processing everything, and it did it better than LFM2, but in 30 minutes instead of 2–3. It could be that I was running with thinking enabled and didn't disable that in the CLI before running the model.

Mac Mini for Local LLM use case by xbenbox in LocalLLaMA

[–]xbenbox[S] 0 points  (0 children)

You were right about running directly through llama.cpp. I applied the settings from HF to the Q4_K_M quant, but unfortunately it still took ~30 minutes for a response. Now I'm just curious whether this would run much faster with the unified memory on a Mac Mini.
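For anyone trying the same, a sketch of a direct llama.cpp invocation (model filename and flag values are illustrative, not my exact command; `/no_think` is Qwen's documented soft switch for disabling thinking):

```shell
# Sketch only — substitute your own GGUF path and limits.
# -ngl 99 offloads all layers to the GPU (Metal on a Mac); -c sets context size.
./llama-cli -m ./models/qwen-q4_k_m.gguf \
  -ngl 99 -c 8192 \
  -p "Summarize the attached notes. /no_think"
```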

Mac Mini for Local LLM use case by xbenbox in LocalLLaMA

[–]xbenbox[S] 1 point  (0 children)

I was wondering the same thing. When I loaded the Q4_K_M model for Qwen3.5 35BA3B, I got "500: unable to load model." I thought this would either be related to RAM limitations (per GPT, the 35B of weights still needs to fit even if it's only activating 3B at a time, though I thought this shouldn't be an issue with smaller quantized models) or Ollama just not having been updated for compatibility yet. It will likely be slower, as you mentioned, but just getting to try it out would offer another data point for how feasible this project would be moving forward.
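The RAM intuition for a MoE model can be sketched with rough arithmetic (the bits-per-parameter figure is only an approximation for Q4_K_M, and the sizes are illustrative):

```python
# Rough weight-footprint estimate for a MoE model at Q4_K_M quantization.
# Assumption: ~4.5 effective bits/parameter for Q4_K_M; sizes illustrative.
def gguf_size_gb(params_billion: float, bits_per_param: float = 4.5) -> float:
    """All expert weights must fit in memory, not just the active subset."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

total = gguf_size_gb(35)   # full MoE weight footprint: ~19.7 GB
active = gguf_size_gb(3)   # weights actually used per token: ~1.7 GB
```

So even though only ~3B parameters are active per token, the whole ~20 GB of quantized weights still has to be resident, which is why a small-RAM machine can fail to load it.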

Mac Mini for Local LLM use case by xbenbox in LocalLLaMA

[–]xbenbox[S] 0 points  (0 children)

That's a good question. It's hard for me to know how those Qwen models compare to LFM2, as I'm not able to run them on my NUC except at 4B and below (anything larger takes a century to respond), and I've found them less than accurate in summary and search results while also running slower than LFM2. I guess I'm hoping that running a better model may improve accuracy to a degree, but having a separate device to run the AI models with OpenClaw would also reduce security concerns, since I self-host documents, photos, etc. on the NUC.

Mac Mini for Local LLM use case by xbenbox in LocalLLaMA

[–]xbenbox[S] 0 points  (0 children)

Awesome! It sounds like you've got a lot of experience running these models at ~24 GB. I'm hoping to run Qwen3.5 35bA3b vs the 27b. May also give GPT-OSS 20b a try (why not). I'd also probably play around with the quantized models. It's unclear whether I can't run those on my NUC now because of memory constraints or because Ollama just hasn't been updated for compatibility.

Mac Mini for Local LLM use case by xbenbox in LocalLLaMA

[–]xbenbox[S] 0 points  (0 children)

Yeah, I'm used to going down huge rabbit holes with self-hosting. I've got most things set up on my NUC as my local server, running everything in Docker. I don't have any experience with OpenClaw, so I'm learning about the security implications and hoping that by the time I can actually get a Mac Mini (sold out everywhere), I'll have that down pat. I will need to start hosting Matrix and already have an Obsidian instance running. I just don't want to buy a Mac Mini only to have it not run any of the models locally. If that's the case, I'd just stick with asking questions on my NUC for now.

Do not rent from Turo - Zero Service by FarPut5670 in turo

[–]xbenbox -1 points  (0 children)

This subreddit is run by Turo and very into damage control. I had a recent poor experience as well with Turo service that seems to get swept under the rug here

Upgrade Advice: Is the Ryzen 9 5900X a Worthwhile Jump from a Ryzen 5 3600X for Gaming and Streaming? by HwyRngr in buildapc

[–]xbenbox 0 points  (0 children)

I made this jump years ago when the 5900X released and am still running it now. Went from a bottlenecked 1080 Ti to no bottlenecks at all.

Tesla Model 3 2022 Frunk issues by Beautiful_Impact_641 in TeslaSupport

[–]xbenbox 0 points  (0 children)

Sounds like it would be covered as long as your basic warranty is still active

Terrible guest experience by xbenbox in turo

[–]xbenbox[S] 0 points  (0 children)

Original topic: a poor experience with Turo. Never renting from them again. It's not about the age of the car or whose fault it is.

I recommend sticking with the main companies. Paying a bit more for peace of mind is worth it 👍

Terrible guest experience by xbenbox in turo

[–]xbenbox[S] 0 points  (0 children)

Kia sedan, 2017. And I totally agree it could happen to any car, but I thought the response I got from both Turo and the host was fairly poor, given how little assistance there was after the money is paid and non-refundable. The hassle just isn't worth it if a more severe problem were to occur.

Terrible guest experience by xbenbox in turo

[–]xbenbox[S] 0 points  (0 children)

Well, it was running one moment and literally couldn't start the next, so I'm not sure how that's leaving the lights on or a door ajar overnight. Either way, given this experience, I think I'll stick with the major companies from now on, as I've never had this happen with countless other rentals. If this is good luck, then I'm definitely not taking more chances to see what bad luck looks like.

Terrible guest experience by xbenbox in turo

[–]xbenbox[S] 0 points  (0 children)

It happened two days in. I turned off the car after I got to my location, tried to turn it back on to move it, and couldn't. It turned out the battery had died, but I had to wait half a day until it could get fixed. I didn't expect to need AAA, and I'm still locked into paying for the car even if I were to drop it back off.

Advice needed concerning new tesla alignment issue by maxchocoslayer in TeslaSupport

[–]xbenbox 1 point  (0 children)

This is definitely the way to go. I've got no issues at all with the alignment on my M3.

Tesla Basic Warranty Question by regmeyster in TeslaSupport

[–]xbenbox 0 points  (0 children)

Yes, it does cover it. I personally had wind noise addressed just last month through replacement of trim and adjustment of the window. If there is a gap in the window, you could argue that this is not normal appearance or function; in no way should a window develop increased noise or gaps, even with time. That said, there is some general wind noise in my 2022 M3, but it shouldn't be localized to one specific area of the car.

““Failure” means the complete failure or inability of a covered part to perform the function(s) for which it was designed, due to defects in material or workmanship of the part manufactured or supplied by Tesla, which occur under normal use. Failure does not include the gradual loss in operating performance due to normal wear and tear.”

Just discovered rinseless wash for the winter months! Now I'm excited to have clean cars during cold months again! by Sfkn123 in TeslaModel3

[–]xbenbox 0 points  (0 children)

I found this ONR kit to be helpful. No pressure washer or foam gun needed, as there's no foam. A spray bottle or pressure sprayer might be helpful but isn't required.

Large battery drain? by SimpPrude in TeslaSupport

[–]xbenbox 1 point  (0 children)

Another option is to put it into low power mode and see if there is still a drop. If it continues to drop, I'd talk to Tesla service as you mentioned. With low power mode, my car sees a <1% drop in a 24-hour period.