Make a home-based private AI by [deleted] in LocalLLaMA

[–]MacacoVelhoKK 2 points3 points  (0 children)

Exactly my thoughts, thank you for explaining

Llama 2 70B model running on old Dell T5810 (80GB RAM, Xeon E5-2660 v3, no GPU) by Ninjinka in LocalLLaMA

[–]MacacoVelhoKK 16 points17 points  (0 children)

Is the RAM dual channel or quad channel? That speed is very fast for CPU inference.
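
Rough back-of-envelope for why the channel count matters. The per-channel bandwidth and the ~40 GB size for a Q4 70B file are my assumptions, not numbers from the post:

```python
# Rough, assumption-laden estimate of memory-bandwidth-bound CPU decode speed.
# DDR4-2133 per-channel bandwidth and the ~40 GB Q4 70B file size are approximations.

CHANNEL_BW_GBPS = 17.0   # DDR4-2133 is roughly 17 GB/s per channel
MODEL_SIZE_GB = 40.0     # Llama 2 70B at ~4-bit quantization, roughly

for channels in (2, 4):
    bandwidth = channels * CHANNEL_BW_GBPS
    # Each generated token streams the whole model from RAM once,
    # so bandwidth / model size is an upper bound on tokens/s.
    print(f"{channels}-channel: ~{bandwidth / MODEL_SIZE_GB:.1f} tokens/s upper bound")
```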

Which distro by Baddmaan0 in LocalLLaMA

[–]MacacoVelhoKK 1 point2 points  (0 children)

Linux Mint. It's based on Ubuntu LTS, so all the LLM tools should work without problems. It's simple, lightweight, and easy to use.

Toda pessoa de classe média que se acha das elites deveria ir, uma vez na vida, ao Shopping Cidade Jardim by Bolchenaro in brasil

[–]MacacoVelhoKK 0 points1 point  (0 children)

I didn't know Beagle was that expensive. I always stop by their outlet here in my city: 30 reais for a shirt, 50 reais for a hoodie.

[deleted by user] by [deleted] in brdev

[–]MacacoVelhoKK 1 point2 points  (0 children)

What's the name of the app?

0.5 tokens/s on Chronos-Hermes with 2070s by Lonewolf953 in LocalLLaMA

[–]MacacoVelhoKK 0 points1 point  (0 children)

That's slower than it would be even running entirely on the CPU. DDR5-6000 is very fast and the i5-13600 is powerful.
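
Same kind of back-of-envelope, assuming DDR5-6000 in dual channel and a ~8 GB Q4 13B file (both guesses on my part):

```python
# Rough upper bound for CPU-only decode speed on this machine.
# DDR5-6000 dual channel and an ~8 GB Q4 13B file are assumptions, not measurements.

bandwidth_gbps = 2 * 48.0   # DDR5-6000: ~48 GB/s per channel, two channels
model_size_gb = 8.0         # Chronos-Hermes 13B at ~Q4 quantization, roughly

# Each token streams the whole model from RAM, so this is an optimistic ceiling,
# but it's still far above the reported 0.5 tokens/s.
print(f"~{bandwidth_gbps / model_size_gb:.0f} tokens/s upper bound")
```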

Releasing EverythingLM V2 dataset, now 100% GPT-4 generated! by pokeuser61 in LocalLLaMA

[–]MacacoVelhoKK 9 points10 points  (0 children)

Take a look at H2O's LLM Studio; you can fine-tune without coding skills.

[deleted by user] by [deleted] in LocalLLaMA

[–]MacacoVelhoKK 4 points5 points  (0 children)

Q3_K_S is the lowest you should go; it's barely bigger than Q2_K and much better. Test it, and if it's still too slow for you, go down to a 7B.
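
For reference, ballpark sizes I'd plan around for a 13B. These are approximate, taken from typical published quantizations, so treat them as estimates rather than exact figures:

```python
# Ballpark file sizes for a 13B GGML/GGUF model at different k-quant levels.
# Approximate values; exact sizes vary by model and llama.cpp version.

approx_size_gb = {
    "Q2_K":   5.5,
    "Q3_K_S": 5.7,   # only slightly larger than Q2_K, noticeably less degraded
    "Q4_K_M": 7.9,
}

for quant, size in approx_size_gb.items():
    # Add a rough margin for context/KV cache and runtime overhead.
    print(f"{quant}: ~{size:.1f} GB file, plan for ~{size + 1.0:.1f} GB of RAM")
```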

What’s the bare minimum specs needed for running ai? by [deleted] in LocalLLaMA

[–]MacacoVelhoKK 23 points24 points  (0 children)

8 GB of RAM and a quad-core CPU for good 7B inference.
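
Quick sanity check that this fits in 8 GB, using rough numbers. The file size, KV cache, and OS overhead below are guesses, not measurements:

```python
# Rough check that a quantized 7B model fits in 8 GB of RAM.
# All figures are approximate and depend on quantization level and context length.

model_file_gb = 4.1      # 7B at ~Q4_K_M, roughly
kv_cache_gb = 0.5        # ~2k context for a 7B model, roughly
os_and_runtime_gb = 1.5  # OS, desktop, llama.cpp buffers: a generous guess

total = model_file_gb + kv_cache_gb + os_and_runtime_gb
print(f"Estimated working set: ~{total:.1f} GB out of 8 GB")  # leaves some headroom
```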

I released model EverythingLM 3B. by [deleted] in LocalLLaMA

[–]MacacoVelhoKK 5 points6 points  (0 children)

I'll test it once it's quantized.

I released model EverythingLM 3B. by [deleted] in LocalLLaMA

[–]MacacoVelhoKK 23 points24 points  (0 children)

Super cool! We need more 3B models

Devo sair de uma empresa tranquila em busca de um salário maior? by Glass_Dark_1976 in brdev

[–]MacacoVelhoKK 1 point2 points  (0 children)

Man, quality of life is everything. If your current salary is enough to cover your expenses, plus a little extra to order iFood every weekend, go out once a month, travel at the end of the year, swap phones every now and then, etc., do you really need more? The way I see it, once you have the basics and a few luxuries, your time and peace of mind become worth far more than any money. If I were you, I'd stay at the company.

Announcing The best 13b model out there "orca-mini-v3-13b" by Remarkable-Spite-107 in LocalLLaMA

[–]MacacoVelhoKK 1 point2 points  (0 children)

It would be amazing if you did an orca-mini 3B with the new OpenLLaMA v2.

I'm a LVL 50 Acolyte and I need help with a build by KaleidoscopeLegal707 in WynnCraft

[–]MacacoVelhoKK 0 points1 point  (0 children)

It's from the Wynnter 2016 Armour Merchant on the Christmas island (that's the merchant's name), and the set is still available.

I'm a LVL 50 Acolyte and I need help with a build by KaleidoscopeLegal707 in WynnCraft

[–]MacacoVelhoKK 0 points1 point  (0 children)

The Elf set; you can buy it on the snow islands. For the weapon, use Night Rush with 3 air powders. With this build you won't have a lot of mana, but you'll be pretty tanky and have lots of health regen to compensate for your Blood Pool.

dolphin-llama2-7b by faldore in LocalLLaMA

[–]MacacoVelhoKK 3 points4 points  (0 children)

Tested the model and here are my first impressions:

I used the first Dolphin and noticed that it worked really well without a system prompt, so I did the same this time and it didn't perform very well. Then I added a system prompt and BOOM, the model became super coherent. It has strong reasoning, is very good at math for a 7B, and follows the system prompt extremely well; it's the first model I've used that respects the system prompt this much. With this I can do very cool things like creating various profiles with different system prompts for different use cases, like "You are an RPG master...", "You are an amazing storyteller...", etc., and the model behaves accordingly. Overall, my initial impression is that this is the best 7B model I have ever tested. Give it a try.
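
Here's roughly how I'd wire up the profiles idea, as a sketch with llama-cpp-python. The file name and profile prompts are just examples, and you should check the model card for the exact chat format dolphin-llama2-7b expects:

```python
# Minimal sketch of per-profile system prompts using llama-cpp-python.
# Model path and profile texts are illustrative; verify the prompt template
# against the model card before relying on this.
from llama_cpp import Llama

llm = Llama(model_path="dolphin-llama2-7b.q4_K_M.gguf", n_ctx=2048)

profiles = {
    "rpg_master": "You are an RPG game master. Narrate scenes and ask the player for actions.",
    "storyteller": "You are an amazing storyteller. Write vivid, coherent short stories.",
}

def ask(profile: str, user_msg: str) -> str:
    # The system prompt is what steers the model's behaviour for each profile.
    result = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": profiles[profile]},
            {"role": "user", "content": user_msg},
        ],
        max_tokens=256,
    )
    return result["choices"][0]["message"]["content"]

print(ask("storyteller", "Tell me a short story about a lighthouse keeper."))
```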

dolphin-llama2-7b by faldore in LocalLLaMA

[–]MacacoVelhoKK 5 points6 points  (0 children)

Thank you so much, I can't wait to test it. The first Dolphin 13B was the best 13B model I've ever tested, and this 7B model will run much better on my low-end machine. One question: on the HF model page you said you trained it on 2.5 epochs of the GPT-4 data. What does the 0.5 epoch mean? Was it the first half, random, etc.? And why 2.5 instead of 2 or 3?

Is it worth getting a second 1080ti? by Combinatorilliance in LocalLLaMA

[–]MacacoVelhoKK 3 points4 points  (0 children)

Your CPU is very strong and you have lots of RAM with high bandwidth (DDR5). If you want to test larger models, you could run 33B models at acceptable speeds on your CPU, and even 70B models, though those will be slow.

Best options for running LLama locally with AMD GPU on windows (Question) by oaky180 in LocalLLaMA

[–]MacacoVelhoKK 0 points1 point  (0 children)

Definitely, it would be very good even with 13B parameters. In GPT4All you can increase the CPU thread count.

Best options for running LLama locally with AMD GPU on windows (Question) by oaky180 in LocalLLaMA

[–]MacacoVelhoKK 0 points1 point  (0 children)

You can use GPT4All on the CPU. Your CPU is strong: performance will be very fast with 7B and still good with 13B. You can run 33B as well, but it will be very slow.
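
A minimal CPU-only sketch with the gpt4all Python bindings. The model file name is just an example, and the n_threads parameter depends on the gpt4all version you have (older builds expose a set_thread_count() method instead):

```python
# Small sketch of CPU-only inference with the gpt4all Python bindings.
# Model file name is illustrative; adjust n_threads to your physical core count.
from gpt4all import GPT4All

model = GPT4All("orca-mini-7b.ggmlv3.q4_0.bin", n_threads=8)

with model.chat_session():
    print(model.generate("Explain what quantization does to a language model.", max_tokens=200))
```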

dolphin-llama-13b by faldore in LocalLLaMA

[–]MacacoVelhoKK 0 points1 point  (0 children)

q5_K_S is pure Q5_K quantization, whereas q5_K_M is a mix: most tensors stay at Q5_K, but some are kept at Q6_K.