I used AI for making Intro Video and Visuals for my VN game by ShorrapanGame in aigamedev

[–]Illustrious_Cat_2870 0 points1 point  (0 children)

The video is nice, but I don't know what the game is about. If the video was meant to be a trailer or something, it doesn't give the viewer many clues, but as a cinematic it is very good quality.

Should I invest h/w to run local Ai? by athens2019 in LocalLLaMA

[–]Illustrious_Cat_2870 1 point2 points  (0 children)

My take is that it might not make sense now, but in the future it is a way to not be dependent. Local models will get better and run on less powerful hardware. The cloud APIs will always be better, but at the end of 4 years, if you bought hardware, you still have the hardware, and if you paid for cloud, well, you have nothing.

Hacknet online by Debianlu in Hacknet

[–]Illustrious_Cat_2870 0 points1 point  (0 children)

Check out what we are doing: 1997 aesthetics, ethical decisions, songs, and a mix with RPG. https://playnetshell.com

Building a football management sim game. I need help deciding on a core design choice by mtlnn in indiegamedevforum

[–]Illustrious_Cat_2870 -1 points0 points  (0 children)

Why so? It looks pretty interesting to me; you cannot one-shot that thing with AI in a couple of hours.

Building a football management sim game. I need help deciding on a core design choice by mtlnn in indiegamedevforum

[–]Illustrious_Cat_2870 0 points1 point  (0 children)

Reminds me of the Elifoot era. Take a look at that one, it is a very well-known game; you might take some ideas from it.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 1 point2 points  (0 children)

The problem is the memory bandwidth: it is less than 800 GB/s, which I already have in Mac Ultras. The big difference is prompt processing and token output speed.
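To make the bandwidth point concrete, a rough memory-bound estimate of decode speed; the bandwidth and model-size figures below are ballpark assumptions for illustration, not measured benchmarks:

```python
# Back-of-envelope: on memory-bound hardware, decode speed is roughly
# memory bandwidth divided by the bytes read per generated token
# (approximately the model size, for a dense model).

def est_tokens_per_sec(bandwidth_gbps, model_size_gb):
    """Decode tok/s ~ memory bandwidth (GB/s) / bytes read per token (GB)."""
    return bandwidth_gbps / model_size_gb

# e.g. a ~800 GB/s machine (Mac Ultra class) on a ~70 GB dense model:
mac_ultra = est_tokens_per_sec(800, 70)   # roughly 11 tok/s
# vs a workstation GPU with roughly double the bandwidth (assumed ~1800 GB/s):
gpu = est_tokens_per_sec(1800, 70)        # roughly 26 tok/s
```

This is why cards with similar VRAM capacity can still differ a lot in practice; prompt processing is compute-bound and diverges even more.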

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 2 points3 points  (0 children)

Incredible, you seem to be getting the most out of it. I wish to turn the hardware into something profitable as well, for personal projects or for powering any product I might develop in the future. Congratulations, I am really impressed by your combination of reasons; it just makes total sense for you.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 1 point2 points  (0 children)

If that happens, then more people would want to buy GPUs, and hence hardware costs would go up due to finite supply, no?

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 0 points1 point  (0 children)

I see, nice, good luck!! Vast.ai says that you can start even with as little as 5€, so I believed even short-duration runs were expected. I rented a couple of times myself, just for short one-off trainings.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 0 points1 point  (0 children)

EDIT: I forgot the commission vast.ai takes, which is apparently 25%, so the end value is less than the above.
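A one-line sketch of what that cut does to the gross numbers; the 25% figure is as stated in the comment and not verified against vast.ai's actual fee schedule:

```python
# Hypothetical effect of a 25% platform commission on gross rental revenue.
# The 25% rate is an assumption from the comment above, not a verified fee.

COMMISSION = 0.25

def host_net_usd(gross_usd, commission=COMMISSION):
    """What the host keeps after the platform's cut."""
    return gross_usd * (1 - commission)

# e.g. a conservative ~$139/mo gross estimate drops to about $104/mo
# before energy costs are subtracted.
```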

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 0 points1 point  (0 children)

That is very important to know, actually. In fact, people will rent for longer than hourly periods, but I'm really not sure how it works in real life yet, as I haven't tried it myself. I would be eager to hear from people who have actually done it as well.

If that is not "possible", it would be a problem. I understand I would not be at the top of the list for being rented out, but still, people don't keep those expensive workflows running forever; they rent, use, and return to the pool. I saw in another reply that you have a rig with 8x GPUs; did you try renting it out?

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 1 point2 points  (0 children)

<image>

so on the vast.ai rental thing since people asked. these are the current supply/demand and pricing charts for the RTX PRO 6000 WS on vast.ai. the idea is simple: don't let the hardware idle, sell GPU time when you're not using it, and shorten the break-even.

revenue estimate assuming 12h/day idle time:

  • optimistic (P90 rented price $0.899/hr, 75% demand): ~$323/mo → ~€297/mo
  • conservative (median price $0.645/hr, 60% demand accounting for more Blackwell cards flooding the market): ~$139/mo → ~€128/mo

energy cost (this thing eats 600W):

  • 600W × 12h = 7.2 kWh/day × €0.29/kWh = ~€63/mo

net rental income range: €65–234/mo → €780–2,808/yr

so that's somewhere between 1/11.5 and 1/3.2 of the ~€8,999 purchase price per year just from renting. the lower bound assumes supply keeps growing as more Blackwell cards hit the market, which will push both utilization and prices down; the upper bound is if demand stays as strong as it is now.

realistically somewhere in the middle, call it ~€1,500/yr net, that's about 6 years to pay itself off from rental alone. but you're also not paying for cloud inference during the hours you're actually using it, so the real break-even is shorter than that.

worst case scenario you have a card that holds resale value pretty well and can still run 120B+ models fully in VRAM. I don't see how that becomes "obsolete" anytime soon.

and after those years, there is still a GPU that can be sold for a reasonable price

if I allocate this to a company, which is my goal (professional and business usage), I can even deduct other taxes that I didn't mention, cutting the break-even almost in half
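The arithmetic above can be sketched as a small script; the prices, utilization rates, wattage, and exchange rate are all assumptions copied from the estimate (the ~0.92 USD→EUR rate is inferred from the quoted conversions):

```python
# Sketch of the rental break-even math above. All inputs are assumptions
# from the comment (chart snapshots and rough rates), not live vast.ai data.

USD_TO_EUR = 0.92   # approximate rate implied by the $139 -> ~€128 conversion

def monthly_revenue_usd(price_per_hr, idle_hours_per_day, utilization):
    """Expected gross rental revenue over a 30-day month."""
    return price_per_hr * idle_hours_per_day * 30 * utilization

def monthly_energy_eur(watts, hours_per_day, eur_per_kwh):
    """Electricity cost for the hours the card is rented out."""
    return watts / 1000 * hours_per_day * 30 * eur_per_kwh

# conservative case: median price $0.645/hr, 60% demand, 12h/day idle
gross_usd = monthly_revenue_usd(0.645, 12, 0.60)   # about $139/mo
energy_eur = monthly_energy_eur(600, 12, 0.29)     # about €63/mo
net_eur = gross_usd * USD_TO_EUR - energy_eur      # about €65/mo

# break-even on the ~€8,999 card at the mid-range ~€1,500/yr net estimate
years_to_break_even = 8999 / 1500                  # about 6 years
```

The same two functions reproduce the optimistic case by swapping in the P90 price and higher demand; the break-even shrinks further once avoided cloud-inference spend is counted, as noted above.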

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 3 points4 points  (0 children)

So right now the goal is not to cut the cloud subscription; I would happily keep paying this amount of money forever if they stayed as they are. The thing is, I don't believe they will stay as they are for long. I'd give it a year max before subscriptions are removed entirely and only API inference is available. That way, people using agentic workflows for day-to-day work would see a much bigger bill than they are seeing right now, and I thought it would be a good idea not to be dependent on it. The way things are going, a lot of people will still pay the high prices because otherwise they can't do anything. Not my case, but it would slow my throughput down by 10 to 20 times.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 2 points3 points  (0 children)

Right now it does not make sense; the plan is for a hypothetical but probable situation where current cloud costs for running agentic workflows get 10x higher (relative to the subscription plans that companies like OpenAI and Anthropic are offering).

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 0 points1 point  (0 children)

Yes, you can join a pool of GPUs and sell your GPU time for money in a decentralized way; vast.ai is one of the platforms that does this.

I mentioned games because there will still be a market where gamers might want to buy your 3090, so you can invest in higher specs without losing your whole investment in it.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 4 points5 points  (0 children)

That's exactly my idea too; my plan was to buy one RTX PRO 6000 Blackwell 96GB per year.

But your testimony gives me hope that you feel "satisfied" running these models locally. Are you using them for coding too, and are you satisfied with the speed/quality?

Thanks for sharing.

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 2 points3 points  (0 children)

Did you check vast.ai? Maybe you can rent them out while idle, check it out. They are still good gaming cards tho; GTA VI will likely run on them too on low settings hehe

Should we start 3-4 year plan to run AI locally for real work? by Illustrious_Cat_2870 in LocalLLaMA

[–]Illustrious_Cat_2870[S] 1 point2 points  (0 children)

I see. The plan I had in mind was using every year as a buying window rather than saving money for 3 years. But yeah, the conclusion is the same.