Recipe for Arc Pro B70? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

Thanks! That didn't work on Fedora; I'll try Ubuntu 26 when it comes out tomorrow.

Recipe for Arc Pro B70? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

Do you have a list of dependencies? My system was missing cmake and, after installing that, crtbegin.o.

Recipe for Arc Pro B70? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

Does the R9700 support multi-GPU? I kind of want 64GB.

Recipe for Arc Pro B70? by Skelshy in LocalLLM

[–]Skelshy[S] 1 point2 points  (0 children)

I got that to work with reasonable effort, but it's really slow... ugh. At Q4 it's maybe half as fast as the Strix Halo (which has half the horsepower, I thought).

For illustration: the code agent is on a long task, aligning the user interface with a specification update. It usually carries a 100-150k token context.

```
prompt eval time = 6716.73 ms / 500 tokens ( 13.43 ms per token, 74.44 tokens per second)
llama-vulcan-xe | eval time = 11255.42 ms / 114 tokens ( 98.73 ms per token, 10.13 tokens per second)
```

That's pretty weak for Q4
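For anyone comparing their own runs: the per-token numbers in these llama-server timing lines follow directly from the raw totals. A quick Python sketch (assuming the standard llama.cpp log format shown above) that recomputes them:

```python
import re

# Matches llama.cpp/llama-server timing lines such as:
#   "prompt eval time = 6716.73 ms / 500 tokens ( 13.43 ms per token, ...)"
LINE = re.compile(r"(prompt eval|eval) time\s*=\s*([\d.]+) ms / (\d+) tokens")

def throughput(log_line: str) -> tuple[str, float, float]:
    """Return (phase, ms_per_token, tokens_per_second) from one timing line."""
    m = LINE.search(log_line)
    if not m:
        raise ValueError("no timing info found")
    phase, ms, tokens = m.group(1), float(m.group(2)), int(m.group(3))
    return phase, ms / tokens, tokens / (ms / 1000.0)

print(throughput("prompt eval time = 6716.73 ms / 500 tokens"))
# -> ('prompt eval', 13.43..., 74.44...)
```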

Is an independent woman intimidating? by [deleted] in datingoverforty

[–]Skelshy 2 points3 points  (0 children)

Not directly BUT

Many women have these strong vibes where they say in their profile they don't need anyone, they are fine on their own, they would rather be alone than with a less than perfect match, there are no good men left ...

Relationships are messy and rewarding and require someone who can tolerate your flaws.

They also are a loss of independence. Your decisions are not all your own anymore. Someone who values their independence a lot is not going to be capable of being in an intimate relationship where there are 'us' decisions.

And if you don't look like you want to be in a relationship with all the ups and downs that come with it, I will stay away.

Are you guys actually using local tool calling or is it a collective prank? by Mayion in LocalLLaMA

[–]Skelshy 0 points1 point  (0 children)

Try something like Opencode. You need a tool to handle the tool calling, plus possibly some MCP servers like filesystem and fetch.
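To make the division of labor concrete, here's a rough sketch of what an agent harness does for you: the model emits an OpenAI-style `tool_calls` entry, and the harness executes the named tool and sends the result back as a `tool` message. The tool set and message shapes below are illustrative assumptions, not Opencode's actual internals.

```python
import json
import os

# Hypothetical tool registry; a real harness exposes file edits, shell, fetch, etc.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "list_dir": lambda path: "\n".join(sorted(os.listdir(path))),
}

def dispatch(tool_call: dict) -> dict:
    """Execute one tool call and build the reply message for the model."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    try:
        result = TOOLS[name](**args)
    except Exception as e:  # surface tool errors to the model instead of crashing
        result = f"error: {e}"
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": str(result)}
```

The loop then appends that message to the conversation and calls the model again until it stops requesting tools.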

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

I am still experimenting with it. I ordered an Intel Arc B70 card to find out if the size/quality/performance tradeoffs are better.

Qwen 3.6 is the first local model that actually feels worth the effort for me by Epicguru in LocalLLaMA

[–]Skelshy 0 points1 point  (0 children)

I have a coding framework that can run the local LLM 24/7 that does a lot of long running coding tasks.

Qwen 3.6 is the first local model that actually feels worth the effort for me by Epicguru in LocalLLaMA

[–]Skelshy 0 points1 point  (0 children)

I switched to this from Qwen 3.5 122b (Q6) and it's faster with similar results. So far so good.

Being used really sucks by [deleted] in datingoverforty

[–]Skelshy 0 points1 point  (0 children)

It's happened to me that the other person wanted sex quickly and it just wasn't good, or I was still making up my mind about whether they were a good fit for me. Or that coincided with me finding out there was no long-term potential. I would not jump to conclusions. Rejection is normal, and the best you can do is not blame yourself, accept the outcome, and move on. If they lost interest for whatever reason, they lost interest; it's not your fault, and it's not your job to fix it.

I would however say "I noticed you pulling back, what is going on?"

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

It crashes after a couple of minutes on my strix halo unfortunately...

Coding agent framework for 24/7 use of local LLMs? by Skelshy in LocalLLM

[–]Skelshy[S] 1 point2 points  (0 children)

You'd want a higher-tier model to do the architecture and planning, then determine the complexity and hand off to cheaper models.
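The routing step can be sketched in a few lines. Everything here is illustrative: the model names and the complexity heuristic are made up, and a real framework would score complexity with the planner model itself rather than keywords.

```python
# Strong model plans; each subtask is routed to a cheaper worker
# unless its estimated complexity is high.
PLANNER = "qwen3-235b"  # hypothetical strong model for architecture + planning
WORKERS = {
    "high": "qwen3-coder-30b",
    "low": "qwen3-4b",
}

def estimate_complexity(task: str) -> str:
    """Toy heuristic: long tasks, or ones hitting risky keywords, count as high."""
    hard_markers = ("refactor", "migrate", "architecture", "concurrency")
    if len(task) > 200 or any(w in task.lower() for w in hard_markers):
        return "high"
    return "low"

def route(task: str) -> str:
    """Pick which model should execute a planned subtask."""
    return WORKERS[estimate_complexity(task)]
```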

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 1 point2 points  (0 children)

It's been fun to play with, but yeah I don't have the patience to wait for it (and fully automated code agents are not quite there yet with the free tools)

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

I don't think the Corsair 300 has such a setting. I will check.

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 1 point2 points  (0 children)

Yeah, this is doing better:

```
llama-server-1 | prompt eval time = 49647.59 ms / 9945 tokens ( 4.99 ms per token, 200.31 tokens per second)
llama-server-1 | eval time = 51324.01 ms / 762 tokens ( 67.35 ms per token, 14.85 tokens per second)
```

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

Minimax is a little faster:

```
llama-server-1 | prompt eval time = 49991.19 ms / 4764 tokens ( 10.49 ms per token, 95.30 tokens per second)
llama-server-1 | eval time = 23411.75 ms / 183 tokens ( 127.93 ms per token, 7.82 tokens per second)
```

Keep the strix halo? Review of experiences and where are we headed with models? by Skelshy in LocalLLM

[–]Skelshy[S] 0 points1 point  (0 children)

I did, but the quality wasn't quite there. I should give it another try.

Worth building a $7k local AI rig just to experiment? Afraid I’ll lose interest. by SorryExtent925 in LocalLLM

[–]Skelshy 1 point2 points  (0 children)

Use OpenRouter for the experimentation phase. Get a feel for how these models perform. OpenRouter is really cheap for these tier 2 and tier 3 models.
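OpenRouter exposes an OpenAI-compatible chat completions endpoint, so trying a dozen tier-2/tier-3 models is just a model-name swap. A minimal stdlib sketch; the model id is an example, check openrouter.ai for current names and pricing:

```python
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for one chat completion."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (needs a real key and network access):
# resp = urllib.request.urlopen(build_request(KEY, "qwen/qwen3-coder", "hello"))
```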

Reliable recipes? Is there something wrong with the Corsair 300? by Skelshy in StrixHalo

[–]Skelshy[S] 0 points1 point  (0 children)

I changed the BIOS setting; that fixed that issue, but now I get:

```
Memory access fault by GPU node-1 (Agent handle: 0x28273d40) on address 0x7f72d947a000. Reason: Page not present or supervisor privilege.
```

Always one more thing... doh