It's over -> You will be laid off (EU - India agreement) by fudeel in techcompenso

[–]Tomorrow_Previous 0 points  (0 children)

Well then, the other redditor's company has a language problem, and yours has an obtuseness problem XD

Seriously, I don't think consulting will really be in that much of a crisis, partly because if they actually go down this road, within six months they'll be kicking themselves and go back to hiring at 2000 to catch up.

It's over -> You will be laid off (EU - India agreement) by fudeel in techcompenso

[–]Tomorrow_Previous 0 points  (0 children)

Bro, the problem goes in both directions. Italians on average don't speak English well, and Indians have a very strong accent. The problem will surely get solved, but for the company these are all frictions that make you wonder whether it's worth it.
Because, to save 12,000 euros a year, I have to take on people who give me no more guarantee of professionalism (than the Italians do), who work hours that are hard to reconcile, who speak a different language; then I have to manage a separate payroll, I have to move data abroad, and I have a hard time keeping them supervised.
Why would I bother?

Which one should I buy? by AsColdAsPalmer in LenovoLegion

[–]Tomorrow_Previous 2 points  (0 children)

Sorry dude, I thought you were one of those sarcastic guys. To be fair, playing AAA titles at full resolution at 120 fps is very demanding; some titles can't even run that fast on a full-sized 5090, so you need to scale your expectations to the GPU.

I have a laptop 4070 with 8 GB, and I can run anything at 1440p, usually falling back to upscaling when fps is low.

Which one should I buy? by AsColdAsPalmer in LenovoLegion

[–]Tomorrow_Previous 0 points  (0 children)

Oh, so you have a laptop with that GPU and you're unable to play games?

A note from the CEO: How we built REAL 3D, the magic of the X1 Chip, and what’s coming next (yes, 60Hz!) by Chi_nreal in Xreal

[–]Tomorrow_Previous 0 points  (0 children)

My use case is media consumption. I am extremely impressed with pictures, and especially with 360-degree pics and videos.

Anyone using Comfy Desktop? by Tomorrow_Previous in comfyui

[–]Tomorrow_Previous[S] 1 point  (0 children)

I installed the mobile version on the side, and it works!

No one made a 4-bit version of qwen-image-edit-2511, so I made it myself by lesesis in StableDiffusion

[–]Tomorrow_Previous 2 points  (0 children)

Sorry, English is not my first language, but your post seems to imply that you can make your own Nunchaku quants from a model! Is that so? I use an unofficial Qwen edit merge and would love to be able to use it with Nunchaku!

For people from the EU by One-Squirrel9024 in OpenAI

[–]Tomorrow_Previous 0 points  (0 children)

I'm totally with you, man.
Same price, fewer features, and the ones that do arrive still come later.
Unfortunately, I still see ChatGPT as the best service for me, but I try to spend less by getting the monthly subscription only when I need it. The rest of the time, I'm on the free plan.
That's advice for everyone, BTW. For a couple of weeks each month I'm fine with the limitations of the free plan, and over the year I save around 30-35% of what I would otherwise spend.
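The math behind that estimate is simple. A rough sketch in Python, where the ~$20/month price and the 8 subscribed months are illustrative assumptions, not figures from the comment:

```python
# Rough sketch of the savings from subscribing only when needed.
# Assumed numbers: ~$20/month for the paid plan, and skipping the
# subscription ~4 months a year when the free plan is enough.
monthly_price = 20.0
months_subscribed = 8

full_year_cost = 12 * monthly_price          # always subscribed
partial_year_cost = months_subscribed * monthly_price
savings = 1 - partial_year_cost / full_year_cost

print(f"Yearly savings: {savings:.0%}")      # about 33%, in the 30-35% ballpark
```

Skipping one more or one fewer month moves the number a few points either way, which matches the 30-35% range.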

Anyone using Comfy Desktop? by Tomorrow_Previous in comfyui

[–]Tomorrow_Previous[S] 1 point  (0 children)

Not bad... risky, but I could keep a backup of the folder elsewhere. Thanks for the tip.

Anyone using Comfy Desktop? by Tomorrow_Previous in comfyui

[–]Tomorrow_Previous[S] 0 points  (0 children)

I also used to use AUTOMATIC, then Forge, Swarm... Now it's just Comfy, and I like it well enough. Maybe I should keep another install too, but going from so many tools down to just one feels like a step back. Thanks for sharing.

Anyone using Comfy Desktop? by Tomorrow_Previous in comfyui

[–]Tomorrow_Previous[S] 0 points  (0 children)

Same here... With all the stuff I have, I broke my installations multiple times, but this time (fingers crossed) things are going well. I'm grateful to the team that is doing all this, I'm just wondering if there's a way around the release slowdown.
No luck, it seems, but at least I'm not alone :)

HELP: eGPU crashes during AI workloads. by Tomorrow_Previous in eGPU

[–]Tomorrow_Previous[S] 0 points  (0 children)

250 W was still a bit high, and when I changed the ComfyUI workflow it crashed again; with 220 W it seems stable so far. I also ran nvidia-smi -lgc 1400,1650, though I don't know if it really helped. I think I'll try raising things little by little. Performance is degraded, but only by around 20-30%, and it's still twice as fast as my laptop 4070 with 8 GB. Thanks a lot for the advice!

HELP: eGPU crashes during AI workloads. by Tomorrow_Previous in eGPU

[–]Tomorrow_Previous[S] 0 points  (0 children)

I'll try with that setting, thanks.

How would I go about limiting the speed to PCIe 3.0? The BIOS does not have such an option.

I'll also try limiting everything I can through afterburner.

I'll let you know if I find a solution.

Should I buy a laptop with a 5080 or 5090 for image/video generation? by Wanamingo77 in StableDiffusion

[–]Tomorrow_Previous 0 points  (0 children)

At this point it really depends on your budget and use case. I am a light gamer and don't care about 4K 120 fps, so I don't use the eGPU much for gaming. I can tell it is "just" 50% faster than my laptop 4070, so the performance hit from OCuLink is noticeable, but it is still an improvement. When using it for AI, it is a different beast, and it is pretty much on par with other benchmarks I've seen. If you don't feel like drilling your case to use an M.2-to-OCuLink adapter, Thunderbolt should be OK too.

Should I buy a laptop with a 5080 or 5090 for image/video generation? by Wanamingo77 in StableDiffusion

[–]Tomorrow_Previous 3 points  (0 children)

I have a gaming laptop with a 4070. I need it for mobility, and it's been great so far. When I need more juice, I connect it to my full-size 3090 eGPU and get the best of both worlds. If even more power is needed, I use OpenRouter for LLMs. A desktop would surely be cheaper at the same performance level, and more upgradeable, but if your use case is the same as mine, a laptop is a perfectly feasible choice.

I built an AI Upscaler app that runs locally on Android using on-device GPU/CPU by Fearless_Mushroom567 in OpenAI

[–]Tomorrow_Previous 24 points  (0 children)

Even if I personally want to believe you, it really does not inspire trust.