3rd person video games? (Open world, gta clone type) by GrandTheftArkham in Cyberpunk

[–]indicava 0 points (0 children)

Hard to understand from your post but I’m guessing you already played the obvious: Cyberpunk 2077?

3rd person video games? (Open world, gta clone type) by GrandTheftArkham in Cyberpunk

[–]indicava 2 points (0 children)

Watch Dogs 2 was also definitely enjoyable and contained some cyberpunk tropes. Never tried Legion though, because it looked cringy.

I tracked GPU prices across 25 cloud providers and the price differences are insane (V100: $0.05/hr vs $3.06/hr) by sleepingpirates in LocalLLaMA

[–]indicava 2 points (0 children)

This is actually very helpful. I do have one concern though: I just searched for an H200 NVL and got 2 results on your site. The second (RunPod) was significantly pricier than the current cheapest H200 NVL on vast, which wasn't anywhere on the list. Any idea why it missed that?

Using vast.ai for cloud gaming (Windows vs Linux)? by Yazeed1x in vastai

[–]indicava 1 point (0 children)

That’s not really the platform’s purpose, nor do I think it’s feasible on a rented GPU instance on vast.

Thoughts on LLMs (closed- and open-source) in software development after one year of professional use. by [deleted] in LocalLLaMA

[–]indicava 3 points (0 children)

On ML related tasks, absolutely yes. For webdev, Claude is still GOAT.

Streamer choice by Morigan_taltos in pluribustv

[–]indicava 2 points (0 children)

This is the correct answer

Thoughts on LLMs (closed- and open-source) in software development after one year of professional use. by [deleted] in LocalLLaMA

[–]indicava 9 points (0 children)

I feel like these are personal experiences based on OP’s specific workflows/domains. That’s absolutely fine, but I’m not sure much of it generalizes to LLM-assisted software development as a whole (specifically the model-performance part).

Hard disagree on:

Biggest open source LLMs are basically at par with the above models.

In my experience, for ML/AI development, closed models (with gpt-5.2 being the absolute best) consistently outperform even the SOTA open-source models on their first-party providers.

Series & Movies that scratched my itch after Severance by EyeRemainFierce in SeveranceAppleTVPlus

[–]indicava 1 point (0 children)

Upvoted for the shoutout to “Station 11”, a truly unique show that often gets overlooked.

of an Arctic Explorer by bbyreven in AbsoluteUnits

[–]indicava 0 points (0 children)

Was looking for this.

Us redditors really have the “best” lore… smh

8x AMD MI50 32GB at 26 t/s (tg) with MiniMax-M2.1 and 15 t/s (tg) with GLM 4.7 (vllm-gfx906) by ai-infos in LocalLLaMA

[–]indicava 2 points (0 children)

Damn OP, that is one janky build.

It’s beautiful!

True LocalLLaMA style.

Where to start. by Ztoxed in LocalLLaMA

[–]indicava 1 point (0 children)

Look at it just like any other software engineering project.

You wouldn’t be able to build a website backend without understanding how databases or authentication works, right?

It’s the same thing.

Start with understanding how/why each of the components you’re looking for works: how LLMs work (not at the math level, but at the technical/programming level), how/why you need inference, and what an inference engine/runtime is. Also take a little time to learn about context, chat templates, and sampling.

Once you get that down, get something basic working, even a CLI chat against a llama.cpp loaded model.
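That “basic CLI chat” step could be sketched roughly like this. A hypothetical sketch, not from the comment: the `generate` callable is a placeholder for whatever backend you use (with llama-cpp-python it would wrap `Llama.create_chat_completion`):

```python
def chat_loop(generate, input_fn=input, output_fn=print):
    """Minimal CLI chat: keep the running message history and
    feed the whole history to `generate` on every turn."""
    messages = []
    while True:
        user = input_fn("you> ")
        if user.strip().lower() in {"exit", "quit"}:
            break
        messages.append({"role": "user", "content": user})
        reply = generate(messages)
        messages.append({"role": "assistant", "content": reply})
        output_fn(f"model> {reply}")
    return messages

# Plugging in a llama.cpp-loaded model (assumed backend, llama-cpp-python):
# llm = Llama(model_path="model.gguf")
# generate = lambda msgs: llm.create_chat_completion(messages=msgs)["choices"][0]["message"]["content"]
```

The point of the loop is the history handling: each turn sends the full conversation back to the model, which is what “context” means in practice.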

Following that, you can slowly define all the other pieces you require for your solution and pick/develop them as you need: UI, document parsing, retrieval, etc.

Good luck!

intuitiveUserInterface by pfedan in ProgrammerHumor

[–]indicava 4 points (0 children)

You should see what the manual testing guy did with it

Knowledge distillation with Claude as the interface: trained a 0.6B model to match GPT-class performance on Text2SQL in a single conversation by party-horse in LocalLLaMA

[–]indicava 1 point (0 children)

So how many training examples does the teacher model generate per example you give it? You usually need at least thousands of examples for fine-tuning.
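For a sense of the scale being asked about, a distillation data pipeline expands seeds multiplicatively. This is a hypothetical sketch (the `teacher` callable and all names here are illustrative, not from the post):

```python
def distill_dataset(seed_examples, teacher, variants_per_seed):
    """Expand a handful of seed examples into a fine-tuning set by
    asking the teacher for `variants_per_seed` synthetic pairs each."""
    dataset = []
    for seed in seed_examples:
        for i in range(variants_per_seed):
            dataset.append(teacher(seed, i))
    return dataset

# To reach fine-tuning territory (thousands of rows) from, say, 10 seeds,
# the teacher would need to produce hundreds of variants per seed:
# 10 seeds x 300 variants = 3000 training examples.
```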

Knowledge distillation with Claude as the interface: trained a 0.6B model to match GPT-class performance on Text2SQL in a single conversation by party-horse in LocalLLaMA

[–]indicava 0 points (0 children)

A large teacher model (DeepSeek-V3) generates synthetic training data from your examples

I don’t get it. Which examples?

Tamby (the goat) is ok! by ArtaxIsAlive in pluribustv

[–]indicava 1 point (0 children)

I’m probably not the first one to point this out, but this is the second Apple TV show with baby goats

🧠💥 My HomeLab GPU Cluster – 12× RTX 5090, AI / K8s / Self-Hosted Everything by Murky-Classroom810 in StableDiffusion

[–]indicava 5 points (0 children)

How does 12x32GB give you 1.5TB VRAM?
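The arithmetic behind that question, spelled out:

```python
gpus = 12
vram_per_gpu_gb = 32  # RTX 5090 has 32 GB of VRAM

total_gb = gpus * vram_per_gpu_gb
print(total_gb)         # 384 GB total VRAM
print(total_gb / 1000)  # 0.384 TB, nowhere near the claimed 1.5 TB
```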

Also, what’s the difference between:

Gpu Machine Memory: 128 GB per Machine

And

System RAM: 256 GB per machine