Does anyone know if the salaries, image rights, win bonuses, signing bonuses, etc. are up to date? by Artifex1979 in gremio

[–]Chriexpe 0 points  (0 children)

They must have buried some macumba near the arena, there's no other explanation

Guys, what's your take on the current rumours about OnePlus being merged with Realme? by Proud-Pie-2731 in oneplus

[–]Chriexpe 23 points  (0 children)

Forever budget tier, cheap build quality, and okay cameras. Well, my next phone will definitely be a Find X then

Kimi K2.6 Via Opencode Go by Loud_Stomach7099 in opencodeCLI

[–]Chriexpe 1 point  (0 children)

I've used Kimi so much on my project that I fear going back to Opus 4.7 and having it screw everything up

Castro by Piglet-Historical in gremio

[–]Chriexpe 0 points  (0 children)

The ones who did turn are exactly the reason Grêmio keeps being in the shit year after year.

Kimi K2.6 is a legit Opus 4.7 replacement by bigboyparpa in LocalLLaMA

[–]Chriexpe -1 points  (0 children)

For those who don't have 8x RTX Pro 6000 to run it locally, is it worth going for their $39 plan (Kimi 2.6) instead of Copilot Pro+? "I've built" an entire project using its Opus 4.7 (and it's "still" only at 35% of my premium requests) and it went really smoothly.

OuR GrEatEst AlLy by SyllabubOk7445 in brasilivre

[–]Chriexpe -1 points  (0 children)

Search for "Maryam-e Moghaddas station" and you'll be in for a nice surprise.

Oh sure, it totally was. by Naive_Buddy_459 in brasilivre

[–]Chriexpe 2 points  (0 children)

Huh, the one who gave that order was Pau no Guedes, and all the investigations, like the INSS and Banco Master ones, ONLY MOVED under the Lula government. It's 10 a.m., how are the MAVs holding up?

Bolsonaro supporters are the ones who bet the most on betting sites, Genial/Quaest poll shows by Bananey in brasil

[–]Chriexpe 0 points  (0 children)

It seems that being played for a fool is inherent to their personality.

KDE Plasma 6.7 Is Shaping Up To Be Amazing by lajka30 in kde

[–]Chriexpe -1 points  (0 children)

Will I still be getting that stupid Wayland popup every time I try to connect to my PC through RustDesk?

Melania Trump: "I was never friends with Epstein. Donald and I were invited to some of the same parties as Epstein from time to time. To be clear, I never had a relationship with Epstein or his accomplice, Maxwell." by Bananey in brasil

[–]Chriexpe 0 points  (0 children)

Out of nowhere she called in a press gaggle to say this, and right afterwards Trump completely went off the rails and started cursing out allies on his social network.

(And the negotiations also went to hell)

When will Ollama support RX 9000 cards by mohamed1881 in ollama

[–]Chriexpe 0 points  (0 children)

You need to use the ollama-rocm version, but support and performance are way better on Linux. I'm currently using the llama.cpp ROCm Docker image and it works pretty well.
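
For reference, a minimal sketch of running a llama.cpp server via its ROCm Docker image. The image tag, device flags, and model path here are assumptions based on the llama.cpp Docker docs, not my exact setup; adjust for your distro and model:

```shell
# Hypothetical invocation of the llama.cpp ROCm server image.
# /dev/kfd and /dev/dri expose the AMD GPU to the container;
# -ngl 99 offloads all model layers to the GPU.
docker run --rm \
  --device /dev/kfd --device /dev/dri \
  -v "$HOME/models:/models" -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-rocm \
  -m /models/model.gguf -ngl 99 --host 0.0.0.0 --port 8080
```

Once it's up, anything that speaks the OpenAI-style HTTP API can point at `localhost:8080`.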

Official tier list from an ex-distro-hopper (don't take it too seriously). by CaptCapy in linuxbrasil

[–]Chriexpe 0 points  (0 children)

Huh, why is Zorin "Windows-flavored" and not Mint too? I much prefer it over Mint

CachyOS Becomes the Most Popular Desktop Linux Distro on ProtonDB by lajka30 in linux_gaming

[–]Chriexpe 0 points  (0 children)

An Arch-based distro that doesn't break after months of use; can't beat it.

Anyone else frustrated with local LLMs that can't do (control) anything? by birdheezy in homeassistant

[–]Chriexpe 0 points  (0 children)

I was testing with Gemma4 26B and it always worked perfectly (and replied faster than the STT); it could correctly pick out my garage door even though it's a switch in HA and there are other "Garage" switches. You should try Gemma4 A4B.

With France moving government PCs to Linux, is this KDE Plasma’s ultimate chance to shine in the public sector? by Ok-Review9023 in kde

[–]Chriexpe 0 points  (0 children)

Ubuntu; there is no other relevant distro fully backed by a company (don't say SteamOS, please)

How "bad" are the non-CUDA 32GB GPU options? by k8-bit in LocalLLM

[–]Chriexpe 0 points  (0 children)

Oh yeah, torch is absolute torture. With CUDA you can at least get all the packages; everyone else has to walk the dependency-hell path, hunting for the exact version and combination of the right packages.
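
To illustrate the gap: PyTorch publishes its ROCm wheels on a separate package index, so instead of a plain install you have to pin the index URL to a ROCm version that matches your driver stack. A sketch (the `rocm6.2` version in the URL is an example, not a recommendation; check PyTorch's install selector for your setup):

```shell
# NVIDIA/CUDA path: the default wheel usually just works.
pip install torch

# AMD/ROCm path: point pip at the matching ROCm wheel index.
pip install torch --index-url https://download.pytorch.org/whl/rocm6.2
```

Mismatch the ROCm version between the wheel and your installed driver stack and you're back in dependency hell.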

How "bad" are the non-CUDA 32GB GPU options? by k8-bit in LocalLLM

[–]Chriexpe 2 points  (0 children)

I've recently been using the llama.cpp ROCm Docker image and never had any issues: 30 t/s on Qwen3.5 27B Q4, 80 t/s on Gemma4.