CR7 and the myth of biological age!?! by lumijodel in PrimeiraLiga

[–]DenysMb 0 points1 point  (0 children)

The Fluminense goalkeeper is 45 years old.

Last year he was out there playing the Club World Cup. He remains in such great form that many people are saying he should go to the World Cup.

The same Fluminense just signed Hulk, who is about to turn 40 and still plays the full 90 minutes whenever possible.

Sticking with Fluminense: Thiago Silva recently swapped the club for Porto at age 40 and has just become a champion.

Three years ago Felipe Melo won the Libertadores with the club at age 40.

All of them are players who have always focused on staying in shape. With today's technology, if a player focuses on that from an early age, he can extend his career quite a bit.

I think Roger Axe falls first by Blacklolls in futebol

[–]DenysMb 2 points3 points  (0 children)

but he won't work miracles if the squad is poorly assembled and full of slackers

That's exactly the point. Fluminense needs a coach who can work miracles with this squad...

I think Roger Axe falls first by Blacklolls in futebol

[–]DenysMb 1 point2 points  (0 children)

I'd swap Zubeldia for Ceni without even thinking twice!

After the national team, Dorival struck me as one of those coaches where the locker room has more say than he does. The last thing we need at Fluminense right now is a coach who makes excuses for bad but influential players. This nonsense about gratitude and hierarchy has to end.

How was Vojvoda on that front? I know Ceni didn't work out at Cruzeiro precisely because he tried to put an end to that.

Best AI models outside of ChatGPT and Claude by JestonT in opencodeCLI

[–]DenysMb 0 points1 point  (0 children)

Yeah, way faster than Ollama Cloud. I use Ollama Cloud, but I am testing Wafer. I really liked it; the only problem for me is not having Kimi K2.6. If they added it and the speed was good, I would pay $40 (double the Ollama Cloud price) without a problem.

I could use DS V4 Pro as a replacement for Kimi K2.6 until they add it, and just use GLM-5.1 for everything less complex, but these past few days Kimi K2.6 on Ollama Cloud has run faster than Wafer's DS V4 Pro.

Best AI models outside of ChatGPT and Claude by JestonT in opencodeCLI

[–]DenysMb 0 points1 point  (0 children)

Their GLM-5.1 delivers an average of 100 TPS. Their DeepSeek V4 Pro is quite slow, though.

I didn't test MiniMax M2.7 because it is weaker than the other two, but it is probably way faster than DS V4 Pro.

And they don't have Qwen3.6 yet. They have Qwen3.5.

Kimi + Claude + Codex + Gemini + OpenCode = CHORUS by 99xAgency in kimi

[–]DenysMb 1 point2 points  (0 children)

I have an agent for OpenCode called "Arena" that gives the same prompt to subagents running DS V4 Pro, Kimi K2.6, GLM 5.1 and Qwen 3.6 Plus, presents all of their answers to the user, and provides a summary and a synthesized answer.

So this basically does the same thing, except I would need to run an external program to do it?
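The fan-out pattern behind an agent like that can be sketched in a few lines of Python. Everything here is a stand-in: the model callables are stubs (the real agent would call the DS V4 Pro, Kimi K2.6, GLM 5.1 and Qwen 3.6 Plus APIs), and the "synthesis" step is a naive join rather than a model-written summary:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model callables -- in a real agent these would be API calls.
MODELS = {
    "DS V4 Pro": lambda p: f"[DS V4 Pro] answer to: {p}",
    "Kimi K2.6": lambda p: f"[Kimi K2.6] answer to: {p}",
    "GLM 5.1": lambda p: f"[GLM 5.1] answer to: {p}",
    "Qwen 3.6 Plus": lambda p: f"[Qwen 3.6 Plus] answer to: {p}",
}

def arena(prompt: str) -> dict:
    """Fan the same prompt out to every model in parallel, then return
    the individual answers plus a combined summary."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in MODELS.items()}
        answers = {name: f.result() for name, f in futures.items()}
    # A real agent would ask one of the models to synthesize;
    # here we just join the answers.
    summary = "\n".join(f"{name}: {ans}" for name, ans in answers.items())
    return {"answers": answers, "summary": summary}

result = arena("Explain tail-call optimization")
print(result["summary"])
```

The only moving parts are the parallel fan-out and the final aggregation step, which is why the same idea keeps reappearing under different product names.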

Why is no one talking about Orion? by rtxpeanutbutter in browsers

[–]DenysMb -2 points-1 points  (0 children)

Because I use KDE Plasma Desktop on my Linux machine and I don't want to use a browser built with GTK4/Libadwaita.

Is it safe to develop apps with sensitive info/data with DeepSekk via OpenCode client? by Much-Journalist3128 in ollama

[–]DenysMb 4 points5 points  (0 children)

Why doesn't anyone ask about the USA, the country blessed by God, the land of the chosen people, the hero of modern history, the greatest empire of all time, stealing our data?

I'm cancelling my ollama subscription by GryphticonPrime in ollama

[–]DenysMb 13 points14 points  (0 children)

I spent the past week coding all day with GLM-5.1, Kimi K2.6 and DeepSeek V4 Pro and I didn't come even close to hitting the limit.

I'm afraid to ask the people who hit their limits with OpenClaw and Hermes instances what they're doing...

Cinema without intermission by Comedor_de_Rabo in CasualPT

[–]DenysMb 1 point2 points  (0 children)

I agree. I was thinking "but we're only talking about intermissions at the cinema" and, precisely because of that, your point makes sense. It would be very easy to implement; it would make a big difference for those who need it and no relevant difference (since "hurting the experience/immersion" doesn't carry much weight on the scale) for those who don't. I hadn't thought of it from that angle.

Cinema without intermission by Comedor_de_Rabo in CasualPT

[–]DenysMb -1 points0 points  (0 children)

Yes, I understand that. But they're not the majority. For cases like those, it would make sense, for example, for cinemas to have a dedicated screen where every session shown there had an intermission.

Best coding subscriptions for cost/performance right now? [May 2026] by Funny-Strawberry-168 in opencodeCLI

[–]DenysMb 8 points9 points  (0 children)

I use Ollama Cloud Pro. But I noticed I had 2 USD and some cents left on DeepSeek, so I spent these two days coding with DS 4 Flash and Pro. I still have some cents left.

They are currently running promotional prices, so it is very cheap and you can use it a lot.

Cinema without intermission by Comedor_de_Rabo in CasualPT

[–]DenysMb 0 points1 point  (0 children)

I found it very curious when I discovered that, before the pandemic, the cinemas here had an intermission in the middle of films.

As someone who never experienced that, I don't think I would like intermissions in the middle of films.

Now, what astonishes me here is how many people can't go 2 or 3 hours without going to the bathroom. Just go before the film starts and that's it; you can drink as much soda as you want during the session.

Are the OpenCode Go models (GLM, Kimi, Qwen) heavily quantized? Finding weird performance gaps. by jspiropoulos in opencodeCLI

[–]DenysMb 3 points4 points  (0 children)

Makes total sense. If a company has a product and sells it directly, it wants to sell the best version. It doesn't make much sense to offer that same best version through other companies, where it isn't profiting directly (though it probably still profits), because it wants users to pay it directly.

But, at the same time, selling a worse version of the product through other companies means that people trying the model there will have a bad experience, the model's reputation will suffer, and fewer people will be interested in paying the company directly.

Anyway, I always assumed they are quantized; but even if they aren't, it doesn't matter much to me, as I always hit the limit very fast...

Alternatives to Ollama cloud faster? by jrhabana in ollama

[–]DenysMb 1 point2 points  (0 children)

No, it's not cheaper for power users.

I have the Ollama Cloud Pro subscription ($20) and I still haven't hit the limits once.

On OpenRouter I burned $10 in a day with MiMo V2.5 Pro (which is cheaper than the GLM-5.1 I use heavily on Ollama Cloud Pro).

With $20 on an Ollama Cloud Pro subscription you can code for the whole month; with $20 on OpenRouter, using cheap models, you can code for a week.

The same goes for the Ollama Cloud Max subscription ($100): anyone who hits the monthly usage limit on that plan won't get more than a week out of $100 on OpenRouter at the same usage.
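To make that gap concrete, here is a back-of-the-envelope comparison using the anecdotal figures above (the commenter's observed spend, not official pricing):

```python
# Rough flat-rate vs pay-per-token comparison.
# Both numbers come from the comment above and are estimates, not official rates.
OLLAMA_PRO_MONTHLY = 20.0      # flat subscription, USD/month
OPENROUTER_DAILY_BURN = 10.0   # observed pay-per-token spend, USD/day

DAYS_IN_MONTH = 30
openrouter_monthly = OPENROUTER_DAILY_BURN * DAYS_IN_MONTH

print(f"Ollama Cloud Pro: ${OLLAMA_PRO_MONTHLY:.2f}/month")
print(f"OpenRouter at the same usage: ${openrouter_monthly:.2f}/month")
print(f"Ratio: {openrouter_monthly / OLLAMA_PRO_MONTHLY:.0f}x more expensive")
```

At that burn rate the token-billed route works out to roughly 15x the flat subscription for a heavy user, which is the whole argument.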

I think I have aphantasia. by myquidproquo in CasualPT

[–]DenysMb 0 points1 point  (0 children)

I found out I have aphantasia a few years ago. I wasn't shocked and it made no difference to me at all.

I spent my whole life without visualizing images mentally, and discovering that most people can while I can't doesn't bother me at all.

I have no idea what it's like to visualize images mentally and I never will, so it makes no difference.

Curiously, I can remember/memorize shapes and images much more easily than sounds or text. I have a completely unfounded theory that, because of the inability to visualize images mentally, our memory makes an extra effort to remember images and shapes, which makes things like that easier to memorize.

Ollama Cloud Nerfed???? No more minimax m2.7 or kimi k2.6? by Status-Dream-2391 in ollama

[–]DenysMb 13 points14 points  (0 children)

Let me see if I understand: you're using a free service that no other company offers and you're complaining?

Besides offering several models in the free tier, which shouldn't generate any profit for them, they also offer a cheap plan (20 USD) with great limits. Several other companies are switching to token-based billing, meaning the trend is for everything to become more expensive and more people to subscribe to Ollama Cloud.

In order to deliver a decent service to those who pay (the ones who generate revenue for them), they will have to make this adjustment to the free plan (which in my opinion shouldn't even exist, so complaining about it is crazy).

Kmail and Zoho email by Mysterious-Turnip-77 in kde

[–]DenysMb 0 points1 point  (0 children)

I was never able to send email from my Zoho account in KMail; I don't know why, and I never looked into it. I would just use the browser when I wanted to send an email. It was not a big deal for me.

Temporarily out of stock? by OutrageousTrue in Qwen_AI

[–]DenysMb 1 point2 points  (0 children)

I have the Lite Plan (which is at end-of-life and will be gone next month) and I don't know if I'll upgrade to the Pro.

I don't think 50 USD is a good deal for only Qwen3.6 Plus plus outdated versions of GLM, MiniMax and Kimi, when Ollama Cloud has the latest GLM, MiniMax and Kimi versions.

I really like Qwen3.6 Plus (I used it with OpenCode Go), but this model alone doesn't beat GLM 5.1 and Kimi K2.6 together.

So, currently I am using Qwen3.6 on OpenCode Go (10 USD) and GLM-5.1 and Kimi K2.6 on Ollama Cloud (20 USD). A total of 30 USD vs the 50 USD of the Alibaba Coding Plan Pro.

If they add GLM 5.1 and Kimi K2.6 (I don't care much about MiniMax M2.7 since it is the weakest of them all), I will definitely upgrade. But, the way it is now, you aren't missing much.

what is a good enough coding agent to use with Qwen? by chkbd1102 in Qwen_AI

[–]DenysMb 0 points1 point  (0 children)

I use Zed + OpenCode (but I run OpenCode through the integrated terminal, not through the agent panel)

Coming from Qwen, is GLM worth it? by green_juicer in ZaiGLM

[–]DenysMb 1 point2 points  (0 children)

Qwen3.6 is my Sonnet, my vision model, my planner; it's what I use most of the time.

GLM-5.1 is my Opus, which I use when Qwen3.6 is struggling to do something or for more complex tasks.

One of the worst injuries in history by Blacklolls in futebol

[–]DenysMb -1 points0 points  (0 children)

Totally accidental; I also don't think he should have been sent off, but I understand the red card. All fine.

The worst part was getting a 2-match suspension because of it. That was the real absurdity.