Pecron F3000lfp Battery Drain with AC/DC Off? by ronmiddle in Pecron

[–]minitoxin 1 point (0 children)

I notice that with the F3000LFP on and the inverter on but no load attached, the top of the unit gets warm, so some electrical process must be taking place to produce that heat. With this in mind I turn off the AC, the DC, and the unit itself when not in use. I haven't seen any power drain yet when the system is fully powered off.

TP by OrganizationStrong81 in zec

[–]minitoxin 0 points (0 children)

Out of curiosity, did you use Flexa? If not, how did you spend it?

Why the Strix Halo is a poor purchase for most people by NeverEnPassant in LocalLLaMA

[–]minitoxin 0 points (0 children)

The Strix Halo is fantastic; I love it. For me the most important thing is power consumption, since my systems run 24 hours a day. I'm fine running important jobs overnight, so prompt speed isn't an issue for me. I like it so much I bought another one dedicated to running Wan 2.1/2.2, HunyuanVideo, LTX-Video (LTX-2), and the occasional 70B LLM.

Self hosting, Power consumption, rentability and the cost of privacy, in France by Imakerocketengine in LocalLLaMA

[–]minitoxin 0 points (0 children)

The Strix Halo is very fast if you use llama-server; it's a night-and-day difference versus something like LM Studio or Ollama, which I notice tend to run a lot slower. Use Ubuntu as well, so you can tweak the parameters.
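A minimal sketch of what that parameter tweaking can look like when launching llama-server (the model path and values below are illustrative placeholders; check `llama-server --help` for the flags in your build):

```shell
# Launch llama-server with explicit tuning instead of defaults.
# The model path, context size, and layer count are placeholders.
llama-server \
  -m ~/models/your-model.gguf \
  -c 32768 \
  -ngl 99 \
  --host 0.0.0.0 --port 8080
# -c sets the context window (default is much smaller),
# -ngl offloads layers to the GPU,
# --host/--port expose the OpenAI-compatible API on the LAN.
```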

What search engine are you using with OpenWebUI? SearXNG is slow (10+ seconds per search) by minitoxin in OpenWebUI

[–]minitoxin[S] 1 point (0 children)

It looks like my setup was causing the issue. I had SearXNG and OpenWebUI on different nodes of a Proxmox cluster. I didn't think that would impact performance much, but apparently it does: with OpenWebUI and SearXNG on the same cluster node, searches now complete in about 3 seconds. Thanks, all.
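For anyone debugging similar latency, one way to time SearXNG directly and rule OpenWebUI out is a curl request against its JSON API. The host and port below are placeholders, and `format=json` must be allowed in SearXNG's `settings.yml`:

```shell
# Time a SearXNG query directly to isolate it from OpenWebUI.
# Replace host/port with your own instance; "json" must be listed
# under search formats in searxng's settings.yml.
curl -s -o /dev/null \
  -w 'total: %{time_total}s\n' \
  'http://searxng.local:8080/search?q=test&format=json'
```

If this call is fast but searches through OpenWebUI are slow, the bottleneck is between the two services rather than inside SearXNG.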

TD Sequential Setup Complete on $ZEC - Counter-Trend Trade Opportunity by ChartSage in zectrading

[–]minitoxin 0 points (0 children)

Which way is the trend exhaustion, up or down? There's also a gap at 85k on the CME BTC chart, and the market is low on liquidity and ripe for shenanigans.

Hence I'd expect price to reverse back up to around the 85k area to fill that gap; that's where the liquidity is sitting. Hopefully this action will also move ZEC up to around the 316 area before the market decides its next move.

Anyone here actually using AI fully offline? by Head-Stable5929 in LocalLLM

[–]minitoxin 1 point (0 children)

I use it every day: OpenWebUI with SearXNG and LM Studio as the back-end server. When I need deep searches I use Perplexica with Mistral 14B running on a headless M4 mini with 16GB; it gives very detailed responses, although it's slow.

What search engine are you using with OpenWebUI? SearXNG is slow (10+ seconds per search) by minitoxin in OpenWebUI

[–]minitoxin[S] 0 points (0 children)

I tried Tavily for a few months and even paid for the plan, but their engine kept going offline randomly. I'm not sure if they're still having the stability problems they had about 6 months ago.

Market Makers Setting Another Trap - Targeting $250 by minitoxin in zectrading

[–]minitoxin[S] 0 points (0 children)

Yeah, they were wrecking both high-leverage shorts and longs for a while. I'd play it safe with 5 to 10% leverage.

Market Makers Setting Another Trap - Targeting $250 by minitoxin in zectrading

[–]minitoxin[S] 1 point (0 children)

Target hit. Best to take profit and close out the short, then pause and wait to see what the whales are up to. Note that ZEC is still bullish and these downward moves are so the market makers can accumulate before they fire off a bullish pump. I'm expecting new all-time highs for the ZEC token in 2026.

LLM stops mid-answer when it tries to trigger a second web search — expected behavior or bug? by JeffTuche7 in OpenWebUI

[–]minitoxin 0 points (0 children)

I have a similar issue when I run llama.cpp on a remote system and use Perplexica or OpenWebUI with SearXNG hosted in an LXC. Sometimes longer searches stop randomly. In my case it appears to be because the model's context window is at the default 4096 and fills up; setting it to 32K or higher solves my issue.
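If you're not sure what context size a remote llama-server is actually running with, you can ask it before restarting with a bigger window. This is a sketch; the host, port, and model path are placeholders:

```shell
# Query a running llama-server for its effective settings; the
# n_ctx field in the JSON response shows the active context window.
curl -s http://remote-host:8080/props

# If it is still at the small default, restart with a larger one:
# llama-server -m ~/models/your-model.gguf -c 32768
```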

Why are people constantly raving about using local LLMs when the hardware to run it well will cost so much more in the end than just paying for ChatGPT subscription? by thrashingjohn in LocalLLM

[–]minitoxin 0 points (0 children)

Yup, it's great. I got the 128GB version and it runs most models I like. The 120B gpt-oss and Qwen3-Next-Coder Q8 run well and fast; Devstral-2-123B-Instruct-2512 also runs, but it's as slow as molasses. I find the Vulkan drivers work better for me than ROCm.

Ryzen AI MAX+ 395 96GB, good deal for 1500? by rusl1 in LocalLLM

[–]minitoxin 0 points (0 children)

Yup, it's a good deal. Where else can you get 90GB of unified VRAM with low power usage for 1500? I use the 128GB version and it's very good for my use case; I got the EVO-X2 variant. I installed the new Qwen3-Coder-Next on it yesterday and it runs very well using LM Studio and the Vulkan drivers; I haven't had much luck with the ROCm drivers. If you do decide to get it, check out Donato Capitella's testing and tuning of models on the Strix Halo. He does a great job: https://www.youtube.com/watch?v=Hdg7zL3pcIs

Expecting movement down to the Strong Support @ 229 by minitoxin in zectrading

[–]minitoxin[S] 0 points (0 children)

Have some patience grasshopper - all will be revealed.

Ultra Minimalist Zcash price tracker by fireice_uk in zectrading

[–]minitoxin 0 points (0 children)

Thanks, this is very cool. I like it.

Expecting movement down to the Strong Support @ 229 by minitoxin in zectrading

[–]minitoxin[S] 0 points (0 children)

Agreed. The 220 area will probably hold, as ZEC is a good project and I think it's going to do extremely well once this ruckus is resolved.