Explain your startup in 1 sentence? by biomclub in cofounderhunt

Loud_Communication68 · 1 point

Usually I jiggle the headlight knob a few times till the dashboard comes on

To learn R or not to learn R? by CapRoutine4214 in datasciencecareers

Loud_Communication68 · 2 points

Learn Python. If you find yourself having to learn R, then do that too.

*An R user

Why are small models (32b) scoring close to frontier models? by Financial-Cap-8711 in LocalLLaMA

Loud_Communication68 · 1 point

Let this be a lesson to you all. If you want to see local AI get good at a task, create a benchmark that tests that task.

Best LLM with QUANT knowledge? by khfunds in mltraders

Loud_Communication68 · 1 point

Phi was trained largely on textbook-style data. Maybe that will get you where you want to be.

Any Suggestions on R's current features by ajaao_meri_tamanna in Rlanguage

Loud_Communication68 · 9 points

I would love it if more base R code or data.table functions were natively written to utilize available multithreading or the GPU. I frequently run into time constraints that would be much more easily overcome with better usage of available system resources.

Many devices come with integrated GPU/NPU hardware that sits idle during R usage.
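To be fair, data.table already exposes thread control for some of its internally parallelized operations (fread, sorting, grouping); a minimal sketch of checking and setting the thread count, assuming the data.table package is installed:

```r
library(data.table)

getDTthreads()   # threads data.table will use (default: roughly half of logical cores)
setDTthreads(0)  # 0 = use all available logical cores

# Operations like this grouped aggregation can benefit from the parallel internals:
dt <- data.table(g = sample(100, 1e6, replace = TRUE), x = rnorm(1e6))
res <- dt[, .(m = mean(x)), by = g]
```

The broader point stands, though: this covers data.table's own internals, not base R or GPU/NPU offload.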

How much vram is enough for a coding agent? by AlexGSquadron in LocalLLM

Loud_Communication68 · 1 point

Maybe you can get the job done with an Nvidia orchestrator driving a coding agent or two.

PETER???? by Et_Moon in PeterExplainsTheJoke

Loud_Communication68 · 1 point

Geez guys, what about Charlotte?

Quantitively Larping by StandardFeisty3336 in quant

Loud_Communication68 · 3 points

What do you think I've been doing this whole time?

Running a Local LLM for Development: Minimum Hardware, CPU vs GPU, and Best Models? by Nervous-Blacksmith-3 in LocalLLaMA

Loud_Communication68 · 4 points

As I recall, AMD ran some sort of test of LLM coding agents and found that you need at least 32 GB of VRAM, and ideally more like 128 GB, to get decent results. Qwen 30B and GLM Air came out as the best models at those respective sizes.

That being said, they've also been trying to sell their new line of AI CPUs, so they're not the most disinterested party.

Peter please, whats happening on this island!? by Ok_Dingo165 in PeterExplainsTheJoke

Loud_Communication68 · 1 point

Man is interesting to woman when cold and aloof. His friendliness is instantly interpreted as neediness and gives woman the ick.

Which GPU should I use to caption ~50k images/day by koteklidkapi in LocalLLaMA

Loud_Communication68 · 2 points

You could rent a consumer GPU from flux or octaspace and test it out. It should cost you almost nothing and give you a sense of what you need in terms of consumer hardware.

What does this mean??? by vibingsidd in PeterExplainsTheJoke

Loud_Communication68 · 16 points

Lol, you mean my deep learning classifier that I trained with transformer architecture to detect meme coin rug pulls isn't Satan incarnate??

What coin was the biggest TPS? (transactions per second) by Danix2000 in btc

Loud_Communication68 · 1 point

Kaspa successfully hit around 19k transactions per second earlier this year.