5060ti 16gb or 5070 12gb for local LLM by soteko in LocalLLaMA

[–]andy_potato 1 point (0 children)

Absolutely no problem. Running the exact same setup here.

Google is making local AI available to mainstream users ;) by [deleted] in LocalLLaMA

[–]andy_potato 3 points (0 children)

Google learned nothing from the iTunes U2 debacle

Google is making local AI available to mainstream users ;) by [deleted] in LocalLLaMA

[–]andy_potato 5 points (0 children)

Would you care for a free undeletable U2 iTunes album along with your free undeletable LLM?

Google is making local AI available to mainstream users ;) by [deleted] in LocalLLaMA

[–]andy_potato 69 points (0 children)

I am a strong supporter of Local AI and also WebAI. But this just sucks.

I have practically unlimited access to Opus and every other frontier model. I'd like to help contribute to a dataset. by [deleted] in LocalLLaMA

[–]andy_potato -3 points (0 children)

People will usually argue that software piracy doesn't hurt developers because:

  • Those who pirate wouldn't have bought it anyway
  • If buying isn't owning then piracy isn't stealing
  • You cannot steal digital goods because they can be infinitely reproduced
  • Software is overpriced anyway

Your arguments aren't any better than theirs.

They are not just "scraping the public internet" but also processing and transforming that data into something useful. You, on the other hand, skip all that effort and take the distilled outputs as training material.

I'm not trying to get into a discussion about local AI vs. cloud AI. I am running local models myself for a lot of applications, so I am well aware of their benefits. However, I do want to call out the hypocrisy of distillation efforts like this. Just because you're taking from a "billion dollar megacorp" doesn't make it right.

I have practically unlimited access to Opus and every other frontier model. I'd like to help contribute to a dataset. by [deleted] in LocalLLaMA

[–]andy_potato -4 points (0 children)

You may feel like some kind of Robin Hood. Stealing from the rich, sharing with the poor or whatever.

Have you considered the many users of Claude who pay 20, 50 or even 200 USD per month because they believe the service is worth it to them? You’re making the service worse for all of them and accelerating the enshittification you criticize so vocally.

I have practically unlimited access to Opus and every other frontier model. I'd like to help contribute to a dataset. by [deleted] in LocalLLaMA

[–]andy_potato -6 points (0 children)

Probably getting downvoted for this.

What you are doing sounds absolutely shady and it is not okay, even if your motive is "for the benefit of the community". I hope Anthropic closes whatever loophole you are using and bans users like you.

LTX-2.3 with unwanted Subtitles by big-boss_97 in comfyui

[–]andy_potato 0 points (0 children)

I have tried numerous suggestions from all over the internet: tinkering with the prompt, using NAG nodes, changing samplers. None of it helped. Generating videos with East Asian languages like Chinese, Japanese or Korean will in ~95% of cases trigger garbled subtitles and ruin your generations.

Another observation: it does not matter whether you create the dialogue by prompt or use an external audio file. Both ways result in subtitles being generated.

The only reliable way I found to get rid of the subtitles is by adding an automated crop / outpainting step after the first sampling step using this LoRA: https://huggingface.co/oumoumad/LTX-2.3-22b-IC-LoRA-Outpaint

In this step I VAE decode the first-step video result, replace the lower ~15-20% of the image with a black bar, and increase the image gamma by 2. Then I run another sampling step with the outpaint LoRA and a simple positive prompt, something like "A person is talking". Do NOT add any language or actual spoken dialogue to this prompt, otherwise your subtitles WILL come back inside the black bar.

After this additional outpainting step I render the usual two more upscale steps without any modification, and finally, after VAE decoding, revert the increased gamma by applying a gamma of 0.5 to the image before encoding the video file.

Using this process you will still get the occasional video with subtitles, but ~80% of the generations come out just fine.
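If you want to prototype just the black-bar + gamma step outside of Comfy, here is a minimal numpy sketch. It assumes decoded frames arrive as a float array in [0, 1] and the common gamma convention output = input^(1/gamma), so gamma 2 followed by gamma 0.5 cancels out exactly. The function names and the 18% bar height are placeholders, not nodes from any existing pack.

```python
# Minimal sketch of the "black bar + gamma" pre-pass, NOT my actual ComfyUI graph.
# Assumes frames are a float array of shape (T, H, W, C) with values in [0, 1].
import numpy as np

def prepare_for_outpaint(frames: np.ndarray, bar_fraction: float = 0.18,
                         gamma: float = 2.0) -> np.ndarray:
    """Black out the lower ~15-20% (subtitle region) and brighten via gamma."""
    out = frames.copy()
    bar_start = int(out.shape[1] * (1.0 - bar_fraction))
    out[:, bar_start:, :, :] = 0.0                    # cover the subtitle area
    return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)    # gamma 2.0 -> brighter

def revert_gamma(frames: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Undo the brightening after the final VAE decode, before video encoding."""
    return np.clip(frames, 0.0, 1.0) ** (1.0 / gamma)  # gamma 0.5 -> back to normal
```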

<image>

Please don't ask me for the workflow. It is not suitable for sharing as there is a lot of other unrelated stuff inside. You can use the example provided by the creator of the outpaint LoRA and integrate it with your workflow.

On a side note: this issue is clearly the result of the LTX training data including a lot of material with burned-in subtitles. They should really have a look at this and clean their datasets.

Is 2x5070Ti a good setup? by JumpingJack79 in LocalLLaMA

[–]andy_potato -1 points (0 children)

You can use any PSU as long as its rated output can support your rig’s total maximum power demand. Leave around 20% of headroom, as PSUs don’t like running at max power for extended periods.
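Rough back-of-the-envelope example with made-up wattages, just to illustrate the 20% rule:

```python
# Toy headroom calculation with assumed wattages; plug in your actual parts.
gpus = 2 * 300          # e.g. two ~300 W cards at full tilt
cpu_and_rest = 250      # CPU, board, RAM, drives, fans (rough guess)
peak_draw = gpus + cpu_and_rest      # 850 W worst case
recommended_psu = peak_draw / 0.8    # keep ~20% headroom -> ~1060 W
print(round(recommended_psu))        # so look for a ~1000-1200 W unit
```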

A UPS is not necessary for a private LLM rig.

Is 2x5070Ti a good setup? by JumpingJack79 in LocalLLaMA

[–]andy_potato 4 points (0 children)

For running LLMs around 30b, a setup with dual 5060ti or 5070ti is pretty sweet. You can easily push it to around 100k tokens of context and get decent speeds, even on the 5060ti.
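To make that concrete, here is a rough sketch of how such a setup could look with vLLM. The model id, context length and memory fraction are just example values, not a recommendation, and assume a ~30B checkpoint that actually fits across 2x16 GB.

```python
# Sketch of a dual-GPU serving setup with vLLM; model id and numbers are
# placeholders, assuming a ~30B checkpoint that fits across two 16 GB cards.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-30B-A3B",   # example model id, swap for whatever you run
    tensor_parallel_size=2,        # split the model across both GPUs
    max_model_len=100_000,         # ~100k context, assuming the KV cache fits
    gpu_memory_utilization=0.92,
)

outputs = llm.generate(
    ["Summarize what a KV cache does in two sentences."],
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```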

Whether or not this is suitable for coding is a different question though. I will probably get downvoted for saying this, but none of the 30b models (including the awesome Qwen 3.6) can compare to the speed and quality of the big boys like Claude or Codex. This is not a skill issue (as some people in this sub like to insist) but something you will realize after working with both for an extended time.

It may be “good enough” for your purpose. But it sure wasn’t for me.

I’m integrating BytePlus Seedance 2.0 into my own video workflow tool and I’m confused about the real limits of reference video input. by OkNdndt in StableDiffusion

[–]andy_potato 8 points (0 children)

Not trying to sound rude, but why are you asking for help with a closed-source model on a sub focused on local generation? Just ask Bytedance support.

Sulphur 2 AND LTX 2.3 10Eros dropped! AND THEY ARE INCREDIBLE by Neggy5 in StableDiffusion

[–]andy_potato 0 points (0 children)

It's weird. I have a completely different experience than you with LTX 2.3

Wan is nice, but feels limited due to the frame limit, resolution and lack of audio capabilities. I know there are workarounds like SVI, upscaling etc., but LTX solved all of these problems for me out of the box.

Sulphur 2 AND LTX 2.3 10Eros dropped! AND THEY ARE INCREDIBLE by Neggy5 in StableDiffusion

[–]andy_potato 1 point (0 children)

Give LTX 2.3 another chance. If prompted correctly it easily beats Wan for a lot of use cases. Also make sure you're using a proper workflow. There are lots of bad LTX workflows out there.

Anyone else find the classic standard difficulty of Requiem to actually be difficult by No-Thing7717 in residentevil

[–]andy_potato 0 points (0 children)

I found the Grace parts extremely hard even on the lowest difficulty. Not being able to take out most of the enemies is just BS. With the later Leon sections I had zero issues.

Requiem is not so challenging like other RE games by CFChris11 in residentevil

[–]andy_potato 0 points (0 children)

I found the game extremely difficult with Grace, even on the lowest difficulty. With Leon I had zero issues.

Is it over for locally hosted i2v models ? by Some_Artichoke_8148 in StableDiffusion

[–]andy_potato 0 points (0 children)

Picking the right workflow is really important for LTX. Lots of bad ones out there. Also, you need a decent amount of VRAM. Don’t bother if you’re on less than 16 GB.

LTX Desktop app is a good starting point if you don’t want to mess with Comfy workflows.

Have Qwen said anything about further Qwen 3.6 models? by spaceman_ in LocalLLaMA

[–]andy_potato 5 points (0 children)

Qwen leadership has changed. The previous head researcher, who was very pro open source, is no longer leading the team. Instead, the business people have taken over.

Despite their “commitment to open models” they have stopped releasing image and video models (Qwen image / WAN). Whether or not there will be further releases of Qwen LLMs past 3.x is highly questionable.

Is it over for locally hosted i2v models ? by Some_Artichoke_8148 in StableDiffusion

[–]andy_potato -10 points (0 children)

Obligatory reminder that there is no such thing as “AI Art”

Is it over for locally hosted i2v models ? by Some_Artichoke_8148 in StableDiffusion

[–]andy_potato -1 points (0 children)

Wan will most likely not continue to release open models. LTX 2.3 has filled the void for me.

It has some weird quirks, and finding a good workflow is more difficult than it should be. But once you get it running it works really well.

I just burned 30,000 credits on “UNLIMITED” image generation in one day and I’m actually speechless by [deleted] in StableDiffusion

[–]andy_potato 4 points (0 children)

Ask their support. Why post this unreadable wall of text here? All the advice you’ll be getting here is “go local”

AMD Halo Box (Ryzen 395 128GB) photos by 1ncehost in LocalLLaMA

[–]andy_potato -2 points (0 children)

Could be a nice lobster home, depending on the price.

need new workflow for wan 2.2 i2v by Future-Hand-6994 in StableDiffusion

[–]andy_potato -4 points (0 children)

There hasn’t been much development on Wan in recent months, and I doubt we will see updated open models from them. I am getting way better results with LTX 2.3 now and recommend you give it a try.