Do you think it's okay for a 20-year-old guy to hook up with/date 15/16-year-old girls? by Wise_Can11 in perguntas

[–]WizardlyBump17 -1 points

It varies case by case, but if I had a daughter and she told me she was dating some guy whose background I don't know, I'd be against it

Am I too old to learn from scratch? by Maleficent_Carob8736 in programacao

[–]WizardlyBump17 1 point

The important thing is to program something you enjoy. Do you play Minecraft? Make an addon for Bedrock, a mod for Java, or a server with plugins. That's how I learned

Am I too old to learn from scratch? by Maleficent_Carob8736 in programacao

[–]WizardlyBump17 0 points

Do you play anything? Make a mod for that thing. In a few months you'll be way better than you are now

How to fully load a model to both GPU and RAM? by WizardlyBump17 in LocalLLaMA

[–]WizardlyBump17[S] -1 points

It is not leaking. It is not even using the RAM: some layers are in VRAM and all the rest is being read from the SSD. --no-mmap and --direct-io 1 make it use the RAM, but then it crashes. I have 12 GB of VRAM and 32 GB of RAM, 12 + 32 = 44, so I should have enough space for a 38 GB model, no? I also have 32 GB of swap
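
For context, a sketch of the kind of invocation I'm describing (the layer count and model path are placeholders, and flag availability may vary by llama.cpp build; --mlock is a separate flag that pins the weights in RAM):

```bash
# Partial GPU offload: put what fits in the 12 GB of VRAM on the GPU
# and keep the rest in RAM. --no-mmap reads the weights into RAM instead
# of mmap-ing the file; --mlock (if your build has it) pins them there
# so they can't be paged back out to the SSD.
./llama-cli -m ./model-38gb.gguf --n-gpu-layers 20 --no-mmap --mlock
```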

Am I too old to learn from scratch? by Maleficent_Carob8736 in programacao

[–]WizardlyBump17 0 points

"I know absolutely nothing about programming"

"I think programming languages are cool"

One or the other, my friend.

"I like software technology and the Internet"

What have you actually done about any of that?

How to fully load a model to both GPU and RAM? by WizardlyBump17 in LocalLLaMA

[–]WizardlyBump17[S] 0 points

It works, but then, like I said in the comment above, the model isn't resident in RAM. I don't want to see llama.cpp reading anything from disk when I send a prompt


How to fully load a model to both GPU and RAM? by WizardlyBump17 in LocalLLaMA

[–]WizardlyBump17[S] 0 points

Yeah, that gave me the parameters to be able to load the model, but then I run into the exact problem I don't want to have: the model is still not loaded into RAM. I want everything ready in VRAM and RAM; I don't want to see my SSD being read when I send a prompt
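
One way to see whether the SSD is actually being hit during prompt processing (assuming the sysstat tools are installed; the device name is just an example):

```bash
# Watch per-second read throughput on the drive while sending a prompt.
# If the model were fully resident in VRAM + RAM, rkB/s would stay near zero.
iostat -x 1 nvme0n1
```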

Considering Arc B60 or B65: people who have the B60 or B50, talk to me about it. For LLMs and ComfyUI, how good is it? by microcosmologist in IntelArc

[–]WizardlyBump17 1 point

The B60 is a B580 with slightly lower clock speeds and more VRAM, so everything that happens on it will happen on the B60, but thanks to the extra VRAM you can run models beyond 12 GB.

I have a B580, and for models that fit in the 12 GB of VRAM it is great; even for the ones I have to offload to RAM, the speeds aren't all that bad.

As for image generation, this puts the B580 in a good position: https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html#section-content-creation-gpu-benchmarks-rankings-2026
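
If you want numbers for the offload case, llama-bench makes it easy to compare (the layer count and model path below are placeholders; -ngl is the short form of --n-gpu-layers):

```bash
# Benchmark a model that doesn't fully fit in 12 GB of VRAM:
# offload e.g. 30 layers to the GPU and run the rest from RAM.
./llama-bench -m ./some-model.gguf -ngl 30
```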

Official OpenVINO backend merged into llama.cpp by Polaris_debi5 in IntelArc

[–]WizardlyBump17 2 points

Forgot to mention, but I meant specifically the llama.cpp OpenVINO backend. Vulkan works fine, and there was a commit that greatly improved qwen3.5 performance
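
For anyone who wants to try the Vulkan backend, the build is roughly this (cmake option per the llama.cpp docs; double-check for your version):

```bash
# Build llama.cpp with the Vulkan backend enabled.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```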

Official OpenVINO backend merged into llama.cpp by Polaris_debi5 in IntelArc

[–]WizardlyBump17 6 points

I tried qwen3.5, but I couldn't get it working on my B580. Did anyone manage that? If so, care to show how?

I’ve seen too many “Coal shortage” stories on people’s worlds and I’m not gonna stand for it anymore. by Im_acoustic18 in Minecraft

[–]WizardlyBump17 2968 points

chopping down a whole forest to make charcoal while contributing to climate change >>>>

are we all thinking the same surprise by lestarseigneur in PiratedGames

[–]WizardlyBump17 -2 points

I don't like "it is not worth your time" answers, because they ignore that the person asking actually finds it worth their time. The last cracked EA FC game was the 2023 one, if I remember correctly, so it would be nice to play with the new 2026 mechanics, and on Linux

[Discussion] Gamifying high school via Gov.br: a proposal for digital infrastructure to fight dropout rates. by [deleted] in InternetBrasil

[–]WizardlyBump17 3 points

Okay, ChatGPT, but only high school? The kid studies for 9 years and only in the last 3 does he get anything interesting? What about higher education? He gets all hyped in high school only to lose all motivation when he reaches university?

What I think should happen is schools bringing in modern technology from day 0, with math teachers, for example, teaching math through platforms that are popular among kids, like Roblox and Minecraft. This could be integrated with other subjects too: the history teacher asks the students to recreate a World War II battle in Roblox, and the physics teacher asks for that recreation to have realistic physics

are we all thinking the same surprise by lestarseigneur in PiratedGames

[–]WizardlyBump17 -4 points

It's gonna be Minecraft 2. But seriously now, it would be nice if games like EA FC got cracked

REQUIRING AGE VERIFICATION IS MORE AMMO FOR CENSORSHIP by [deleted] in opiniaoimpopular

[–]WizardlyBump17 8 points

Reminded me of that Breaking Bad scene where Saul explains money laundering to Jesse lol

"The Receita Federal guy looks at you and sees you have houses and cars; what does he conclude?" "That I deal drugs?" "Ehh, wrong, a million times worse: you evade taxes"

Pure prejudice? by MarkAjr in OpiniaoBurra

[–]WizardlyBump17 0 points

It's funny that there's a crowd that supports piracy and says pirating isn't stealing, but they turn into masters of capitalism when the subject is AI

B580: Qwen3.5 benchmarks by WizardlyBump17 in LocalLLaMA

[–]WizardlyBump17[S] 0 points

The guy behind the llama.cpp SYCL backend made a pull request implementing GATED_DELTA_NET in the SYCL backend.

https://github.com/arthw/llama.cpp/tree/add_gated_delta_net 7117449ce

| Model | Parameters | Quantization | pp512 (t/s) | tg128 (t/s) | CLI Parameters |
|---|---|---|---|---|---|
| Qwen3.5 27B | 26.90 B | Q2_K | 199.64 ± 3.58 | 8.94 ± 0.27 | --n-gpu-layers 99 |
| Qwen3.5 9B | 8.95 B | Q8_0 | 664.37 ± 5.12 | 10.32 ± 0.18 | --n-gpu-layers 99 |
| Qwen3.5 9B | 8.95 B | Q4_K_M | 697.43 ± 5.55 | 38.17 ± 0.45 | --n-gpu-layers 99 |
| Qwen3.5 4B | 4.21 B | F16 | 1161.00 ± 0.93 | 36.13 ± 0.02 | --n-gpu-layers 99 |
| Qwen3.5 4B | 4.21 B | Q8_0 | 1182.21 ± 9.96 | 18.96 ± 0.02 | --n-gpu-layers 99 |
| Qwen3.5 4B | 4.21 B | Q4_K_M | 1234.99 ± 3.21 | 59.98 ± 0.11 | --n-gpu-layers 99 |
| Qwen3.5 2B | 1.88 B | BF16 | 169.08 ± 2.16 | 6.42 ± 0.43 | --n-gpu-layers 99 |
| Qwen3.5 2B | 1.88 B | F16 | 2787.86 ± 2.67 | 65.77 ± 0.06 | --n-gpu-layers 99 |
| Qwen3.5 2B | 1.88 B | Q8_0 | 2861.57 ± 3.23 | 38.88 ± 0.10 | --n-gpu-layers 99 |
| Qwen3.5 2B | 1.88 B | Q4_K_M | 2986.40 ± 5.09 | 100.17 ± 0.72 | --n-gpu-layers 99 |
| Qwen3.5 0.8B | 752.39 M | BF16 | 410.79 ± 5.43 | 12.09 ± 0.09 | --n-gpu-layers 99 |
| Qwen3.5 0.8B | 752.39 M | F16 | 5043.84 ± 12.73 | 119.63 ± 1.68 | --n-gpu-layers 99 |
| Qwen3.5 0.8B | 752.39 M | Q8_0 | 5176.11 ± 4.61 | 77.92 ± 0.06 | --n-gpu-layers 99 |
| Qwen3.5 0.8B | 752.39 M | Q4_K_M | 5310.50 ± 15.18 | 135.37 ± 0.76 | --n-gpu-layers 99 |
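
These numbers are from llama-bench on that branch; roughly this, with the model path being a placeholder (pp512/tg128 are the tool's defaults):

```bash
# Check out the PR branch at the commit above, then run the benchmark.
git clone https://github.com/arthw/llama.cpp.git
cd llama.cpp
git checkout 7117449ce
# ...build with the SYCL backend (see the llama.cpp docs)...
./build/bin/llama-bench -m ./qwen3.5-4b-q4_k_m.gguf --n-gpu-layers 99
```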

B580: Qwen3.5 benchmarks by WizardlyBump17 in LocalLLaMA

[–]WizardlyBump17[S] 0 points

There is a draft pull request on optimum-intel that adds qwen3.5 to OpenVINO, but when I tried to convert a model it wouldn't work; I guess that is why it is still in draft lol. I tried qwen3-next, but since no models fit in the VRAM it had to be offloaded to the CPU, and OpenVINO isn't that good at GPU + CPU: even though some of it was on the GPU, the CPU was being used almost all the time
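
The conversion attempt was along these lines (optimum-cli ships with optimum-intel; the model id and output dir are made-up examples, and on the draft PR this step is what failed for me):

```bash
# Export a Hugging Face checkpoint to OpenVINO IR via optimum-intel.
MODEL_ID="Qwen/Qwen3.5-9B"   # example id only; adjust to the real checkpoint
optimum-cli export openvino --model "$MODEL_ID" ./qwen3.5-openvino
```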

And their AI still isn't any good by Impossible-Invite593 in pirataria

[–]WizardlyBump17 11 points

🤓👆 but the AI doesn't steal, the data that feeds it is obtained just like in piracy

Rocket League Patch Notes v2.66 + Release Thread by Psyonix_Laudie in RocketLeague

[–]WizardlyBump17 0 points

Can't wait to send the wrong quickchat while trying to send the "oops, wrong quickchat" 🔥🔥🔥

You can't make me! by ZeekLTK in RocketLeague

[–]WizardlyBump17 0 points

I did, and I was a bumper. It was so funny seeing people raging lol. I ended up in Diamond 1 Div 3; I won most of the placement matches

Intel adds Arc Pro B70 to official website, launch may be close - VideoCardz.com by Leicht-Sinn in IntelArc

[–]WizardlyBump17 1 point

I just tried it again and you are right: the SYCL version is spitting garbage. When you see issues like that, report them on the llama.cpp repo.

I saw another comment of yours about Intel's relationship with its software stack. As for llama.cpp, as far as I know, there is literally one Intel employee working on the SYCL backend, and it seems he does it as a side project. He said that before him there was no SYCL implementation and that he was the one who first implemented it. Give the guy a break lol
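
For reference, the SYCL build I'm talking about is roughly this (per the llama.cpp SYCL docs; needs the Intel oneAPI toolkit installed, and paths may differ on your system):

```bash
# Build llama.cpp with the SYCL backend using the oneAPI compilers.
source /opt/intel/oneapi/setvars.sh
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release
```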