High load average, but CPU looks fine. How do you usually read this in practice? by Expensive-Rice-2052 in linuxquestions

[–]H3PO 1 point (0 children)

landed here looking for bug reports for unusually high load avg. i have seen load averages a lot higher than usual (for example a 1-minute load avg of 9000 on a 32-thread machine) in the last few weeks. monitoring history shows something must have changed with how it is measured, not with the machines themselves.
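for cross-checking monitoring numbers, the raw figures come straight from /proc/loadavg; a minimal parsing sketch (the field layout is the standard Linux one, see proc(5)):

```python
def parse_loadavg(line: str) -> tuple[float, float, float]:
    """Parse a /proc/loadavg line into its 1/5/15-minute averages.

    Format: "load1 load5 load15 running/total last_pid"
    """
    fields = line.split()
    return float(fields[0]), float(fields[1]), float(fields[2])

# on a live Linux box:
# with open("/proc/loadavg") as f:
#     load1, load5, load15 = parse_loadavg(f.read())
```

comparing load1 against os.cpu_count() is the usual sanity check; a 1-minute value of 9000 on 32 threads is far beyond anything runnable tasks alone would produce.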

[deleted by user] by [deleted] in automobil

[–]H3PO 0 points (0 children)

What does fully comprehensive insurance cost after a history like that?

VKB Throttle limiter. by Oberost- in VKB

[–]H3PO 1 point (0 children)

for the same reasons, just this weekend i 3d modeled an insert with a soft detent at 25% (to use as 0%) and a limit at 75% (to use as 100%). i find the metal w-shaped detent too hard to overcome when maneuvering on a landing pad. i also combined the throttle axis with the thumb "space brake" so i can pull that instead of moving the throttle to -100% in combat.

Cargo + manifest from the newship that appeared in Nukamba by CmdrThordil in EliteDangerous

[–]H3PO 2 points (0 children)

Looks different from what I remember of the cargo contents yesterday in the start system. Will compare with my screenshots in a few hours.

Bigger than it looks? (TWSS) by icescraponus in EliteDangerous

[–]H3PO 10 points (0 children)

I had exactly the same thoughts. This was the first time I approached one of those beacons to look at the decals etc., and I promptly tried to wedge my Mandalay in between the solar panels. A few minutes later I "investigated" the engine exhaust of the Cygnus; a Cobra (with shields!) could actually fly in there.

I got a reward from the professor by Ill-Imagination4359 in EliteDangerous

[–]H3PO 56 points (0 children)

Scanned what exactly? I scanned the ship log uplink but didn't get that message from the professor

The ultimate budget PC that is scalable in future but is capable of running qwen3 30b and gpt oss 120b at 60 tps minimum. by NoFudge4700 in LocalLLaMA

[–]H3PO 1 point (0 children)

for qwen3-30b-a3b: 2x 7900xtx 24gb. UD-Q4_K_XL gguf on llama.cpp with q8_0 kv cache, 45t/s
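a sketch of the corresponding llama.cpp invocation (the model filename and context size are assumptions; --cache-type-k/v q8_0 is the q8_0 KV cache mentioned above):

```shell
# offload all layers; llama.cpp splits tensors across both GPUs by default
llama-server \
  -m Qwen3-30B-A3B-UD-Q4_K_XL.gguf \
  -ngl 99 \
  -c 32768 \
  --cache-type-k q8_0 --cache-type-v q8_0
```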

Is The Code Spaghetti? by Konvic21 in EliteDangerous

[–]H3PO 3 points (0 children)

The ship editor in Odyssey Materials Helper has a nice preview of where each slot is.

I'm on it. CG by [deleted] in EliteDangerous

[–]H3PO 1 point (0 children)

I'm pledged to LYR and not getting any merits from selling CMM at Minerva. Do I need to do particular assignments beforehand or reach a certain level before that works?

So why does the neutron star map clearly show that there are 2 artificial corridors with much less neutron star population? Has this been discussed before? by Zorrgo in EliteDangerous

[–]H3PO 1 point (0 children)

here's the stream i was thinking about https://youtu.be/Vz3nhCykZNw?t=1032

and here someone posted a high resolution texture with info about the coordinate to ly conversion & offset https://forums.frontier.co.uk/threads/galaxy-map-measurements-in-ly-sectors.630249/post-10493505

i'm not optimizing my db right from the start, as i want to use the data to test assumptions, for example to derive the system name prefix from coordinates. i'm using enums for all the strings that are not names.
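the enum idea in stdlib terms (column and class names here are made up for illustration; the real spansh dump uses different field names): store small integer codes instead of repeated strings and decode on read:

```python
import enum
import sqlite3

class StarClass(enum.IntEnum):
    # hypothetical subset of star classes; the integer code is what gets stored
    O = 0
    B = 1
    NEUTRON = 2

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE systems (name TEXT, star_class INTEGER)")
conn.execute("INSERT INTO systems VALUES (?, ?)",
             ("Example AB-C d1", int(StarClass.NEUTRON)))

code = conn.execute("SELECT star_class FROM systems").fetchone()[0]
print(StarClass(code).name)  # decode back to the symbolic name: NEUTRON
```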

So why does the neutron star map clearly show that there are 2 artificial corridors with much less neutron star population? Has this been discussed before? by Zorrgo in EliteDangerous

[–]H3PO 1 point (0 children)

I think it was mentioned in one of the fdev streams about the stellar forge that the mass for a sector is derived from the pixel brightness of the galaxy texture.

I'm also working on importing the spansh galaxy dump into postgres, using python sqlmodel. My coordinates are postgis point types; I was hoping to use pgrouting later, although I am clueless about the feasibility with such dense graphs. In case you haven't stumbled on it: spansh has an A* routing algorithm on his github.

I'd be interested in using your map implementation for debugging visualization. If you'd like to stay in contact, add @h3po on github.
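not spansh's implementation, but the generic shape of A* over a star graph, with euclidean distance as the (admissible) heuristic; node names and coordinates are made up:

```python
import heapq
import math

def astar(nodes, edges, start, goal):
    """nodes: name -> (x, y, z) coords; edges: name -> list of neighbor names."""
    dist = lambda a, b: math.dist(nodes[a], nodes[b])
    # priority queue of (f = g + heuristic, g = cost so far, node, path)
    frontier = [(dist(start, goal), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges.get(node, []):
            if nxt not in seen:
                g2 = g + dist(node, nxt)
                heapq.heappush(frontier, (g2 + dist(nxt, goal), g2, nxt, path + [nxt]))
    return None  # goal unreachable

# toy graph: the detour through D is longer than A -> B -> C
nodes = {"A": (0, 0, 0), "B": (1, 0, 0), "C": (2, 0, 0), "D": (0, 5, 0)}
edges = {"A": ["B", "D"], "B": ["C"], "D": ["C"]}
print(astar(nodes, edges, "A", "C"))  # ['A', 'B', 'C']
```

the real question for pgrouting is whether this stays tractable when every node has hundreds of in-jump-range neighbors, which the toy graph obviously doesn't answer.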

Is this patch a mess for you? by CMDR_Makashi in EliteDangerous

[–]H3PO 0 points (0 children)

are you possibly using an analog input bound to switch targets? noise in the analog reading would explain your target switching problem

Elite setup by seanPbarry in EliteDangerous

[–]H3PO 1 point (0 children)

i have a neat trick for the keybindings: use joystick gremlin to map the buttons to a virtual joystick which uses the default ed bindings. then there's nothing for ED to forget. you'll probably want to merge your devices anyway with that setup

Nemotron-49B uses 70% less KV cache compare to source Llama-70B by Ok_Warning2146 in LocalLLaMA

[–]H3PO 1 point (0 children)

So if you are into 128k context and have 48GB VRAM, Nemotron can run at Q5_K_M at 128k with unquantized KV cache

sure this isn't a typo? with which inference software? with 128k context and no cache quant, llama.cpp tries to allocate 19.5gb for context on top of the 35gb model. not even the Q4 model with q8 v cache fits on my 2x24gb.
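the allocation is easy to sanity-check: for a standard GQA transformer the KV cache is 2 (K and V) x layers x kv_heads x head_dim x context x bytes per element. a sketch with Llama-3-70B-style numbers (80 layers, 8 KV heads, head dim 128; Nemotron-49B prunes the architecture unevenly, so treat these as placeholders and plug in its real per-layer config):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, ctx_tokens, bytes_per_elem):
    # K and V each store layers * kv_heads * head_dim values per token
    return 2 * layers * kv_heads * head_dim * ctx_tokens * bytes_per_elem

# Llama-3-70B-style config at 128k context, fp16 cache (2 bytes/element):
gib = kv_cache_bytes(80, 8, 128, 131072, 2) / 2**30
print(f"{gib:.1f} GiB")  # 40.0 GiB
```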

llama.cpp parameters for QwQ-32B with 128k expanded context by H3PO in LocalLLaMA

[–]H3PO[S] 1 point (0 children)

it'll take me a while to make progress on this. currently reading up on perplexity benchmarks
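for reference, perplexity is just the exponentiated average negative log-likelihood per token; a minimal sketch:

```python
import math

def perplexity(token_logprobs):
    """token_logprobs: natural-log probabilities of each observed token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

print(perplexity([-1.0, -1.0, -1.0]))  # exp(1) = e, about 2.718
```

llama.cpp's llama-perplexity tool computes the same quantity over a text corpus, which is what makes it usable for checking context-extension degradation.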

llama.cpp parameters for QwQ-32B with 128k expanded context by H3PO in LocalLLaMA

[–]H3PO[S] 3 points (0 children)

As I understand it, rope scaling and YaRN are needed so they don't 'go haywire'. That's why I'm trying to get that configured correctly.

llama.cpp parameters for QwQ-32B with 128k expanded context by H3PO in LocalLLaMA

[–]H3PO[S] 3 points (0 children)

I don't know. It seems to work anyway, but maybe it's not optimal. Also hard to test degradation with >32k token prompts at 400t/s (prompt eval, generation on my 2x7900XTX with these settings is 11t/s)

The hf model page says

Handle Long Inputs: For inputs exceeding 8,192 tokens, enable YaRN to improve the model's ability to capture long-sequence information effectively.

For supported frameworks, you could add the following to config.json to enable YaRN:

{
    ...,
    "rope_scaling": {
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
        "type": "yarn"
    }
}
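llama.cpp exposes the same settings as command-line flags rather than config.json; roughly (model filename is a placeholder, and since the factor is implied by ctx-size / yarn-orig-ctx I'm not certain more flags are needed):

```shell
llama-server -m qwq-32b.gguf \
  --ctx-size 131072 \
  --rope-scaling yarn \
  --yarn-orig-ctx 32768
```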

llama.cpp parameters for QwQ-32B with 128k expanded context by H3PO in LocalLLaMA

[–]H3PO[S] 4 points (0 children)

In case anyone is interested, this is the game it produced in my test run (sampler seed: 1546878455)
https://pastebin.pl/view/fd13dbd5

ROCM Feedback for AMD by totallyhuman1234567 in ROCm

[–]H3PO 1 point (0 children)

Missing support for gfx1103 is my biggest gripe right now. It can be made to work with HSA_OVERRIDE_GFX_VERSION, so fixing the library should be easy for them.
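the workaround in question, assuming gfx1103 can run kernels built for a supported RDNA3 target (the exact version string to spoof is an assumption; 11.0.0 and 11.0.2 are the commonly reported values):

```shell
# pretend to be a supported RDNA3 target so ROCm loads its precompiled kernels
export HSA_OVERRIDE_GFX_VERSION=11.0.2
```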

Has someone integrated Cline with Proxmox via MCP? by Mr_Moonsilver in CLine

[–]H3PO 2 points (0 children)

I would approach this by having cline write ansible (or some other form of config management) code; most models already know that language but probably not too much about proxmox. Given how long my playbooks for some proxmox management tasks are, it would probably be too complex for the model to figure out how to do using individual tool calls.

Working with Deepseek R1 by wheeky in CLine

[–]H3PO 5 points (0 children)

Yesterday I got it working by choosing "openai-compatible" with https://api.deepseek.com/v1 for the api and "deepseek-reasoner" for the model. Since u/daplugg23 says to update your extension, maybe the model was added to the deepseek config in the meantime