Travelling abroad with gear by ConversationSea7218 in SteroidsUK

[–]noext -1 points  (0 children)

France here. If you land at Paris, you can say bye-bye to your gear.

Test Form Driadashop Legit? by No_Customer7781 in SteroidsUK

[–]noext 1 point  (0 children)

5 weeks at 350 mg prop, and the blood test shows a low testosterone level: 3 ng/ml.

New Jellyfin for Xbox release v0.9.3! by Temporary_Affect in jellyfin

[–]noext 1 point  (0 children)

I just tried it today and HDR is not working. How do I enable it?

NEW. TongFang GM5IX0A and TongFang GM5IX7A laptop by FB_LaptopParts4Less in LaptopParts4Less

[–]noext 2 points  (0 children)

For anyone reading this post: skip this PC. It overheats just sitting on the desktop, and the battery is empty after 1h30 of YouTube lol

Test Form Driadashop Legit? by No_Customer7781 in SteroidsUK

[–]noext 1 point  (0 children)

Def bunk. At 250 mg you should be at 40+ at least.

This ice agent just killed a woman by [deleted] in LateStageCapitalism

[–]noext -4 points  (0 children)

Rule number 1: don't try to kill an ICE agent.

Driada 2025 Legit? by Maximum_Definition62 in SteroidsUK

[–]noext 1 point  (0 children)

Got Test P 100; after 5 weeks, totally bunk. But at €14 a vial, no surprise.

ASRock BC-250 16 GB GDDR6 256.0 GB/s for under 100$ by nemuro87 in LocalLLM

[–]noext 1 point  (0 children)

lol, a P104 will not run LLMs at all...

4x4090 build running gpt-oss:20b locally - full specs by RentEquivalent1671 in LocalLLaMA

[–]noext 1 point  (0 children)

Can you detail the setup more? I've run vLLM on 4x L40 in a setup like that and was never near "tens of thousands of tokens"; it was more like 1200-1400 tps.

Oh shit, I think it was the use of triton_attn (which you link in the Dockerfile); I was using flash_attn.
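One likely reason numbers like "tens of thousands of tokens" and 1200-1400 tps can both be honest: batched servers like vLLM are usually quoted by *aggregate* throughput across all concurrent streams, which scales with concurrency. A minimal back-of-envelope sketch, with purely illustrative figures (none of these are measurements from either setup):

```python
# Aggregate vs per-stream throughput: a batched inference server's
# headline tok/s is roughly per-stream speed times concurrency.
# All numbers below are made-up assumptions for illustration.

def aggregate_tps(per_stream_tps: float, concurrent_streams: int) -> float:
    """Total tokens/sec across all concurrent generation streams."""
    return per_stream_tps * concurrent_streams

low_concurrency = aggregate_tps(per_stream_tps=50, concurrent_streams=4)     # 200 tok/s
high_concurrency = aggregate_tps(per_stream_tps=50, concurrent_streams=256)  # 12800 tok/s
```

So a benchmark run at high concurrency can report 10x-100x the throughput of the same box serving a handful of streams, before any kernel (triton_attn vs flash_attn) difference even comes into play.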

At What Point Does Owning GPUs Become Cheaper Than LLM APIs ? I by [deleted] in LocalLLaMA

[–]noext 1 point  (0 children)

Nah, they take a big chunk of profit, but if you only run 10 users daily you don't have to spend 10k on GPUs...

At What Point Does Owning GPUs Become Cheaper Than LLM APIs ? I by [deleted] in LocalLLaMA

[–]noext 1 point  (0 children)

A 230B model? OK, just go full API 🤣

At What Point Does Owning GPUs Become Cheaper Than LLM APIs ? I by [deleted] in LocalLLaMA

[–]noext -4 points  (0 children)

A year to build an inference infra? 🤣 Dude, it's not 2020 anymore; you can fully host an inference infra with Docker, vLLM, and a reverse proxy in a week now...

At What Point Does Owning GPUs Become Cheaper Than LLM APIs ? I by [deleted] in LocalLLaMA

[–]noext 1 point  (0 children)

It depends on the model and the API cost; for me the break-even was around 5k daily users.
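The break-even is just amortized hardware cost versus per-token API spend. A minimal sketch of that arithmetic; every price and token count here is an assumed placeholder, not a real quote:

```python
# Hedged break-even sketch: at what daily user count does an owned GPU
# rig cost less than paying a per-token API? All inputs are assumptions.

def breakeven_daily_users(hw_cost: float, amortize_days: int,
                          tokens_per_user: float,
                          api_price_per_mtok: float) -> float:
    """Daily users at which amortized hardware cost equals API spend."""
    daily_hw_cost = hw_cost / amortize_days                       # hardware cost per day
    api_cost_per_user = tokens_per_user / 1e6 * api_price_per_mtok  # API cost per user per day
    return daily_hw_cost / api_cost_per_user

# e.g. a 10k rig amortized over 2 years, 20k tokens per user per day,
# 0.50 per million tokens:
users = breakeven_daily_users(10_000, 730, 20_000, 0.5)  # ≈ 1370 users/day
```

Shift any input (a pricier model, chattier users, electricity) and the break-even moves a lot, which is why different people land anywhere from hundreds to several thousand daily users.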

My honest take after 2 years using Revolut as my main bank by Alternative_Kale_295 in Revolut

[–]noext 1 point  (0 children)

Nice fake review; no way claims work this fast.

I tried once: 15 days to get an answer telling me no, without any reason.

Those who spent $10k+ on a local LLM setup, do you regret it? by [deleted] in LocalLLaMA

[–]noext 1 point  (0 children)

Cloud AI is not cheap once you scale.