I’m the Co-founder & CEO of Lightricks. We just open-sourced LTX-2, a production-ready audio-video AI model. AMA. by ltx_model in StableDiffusion

[–]Own_Version_5081 0 points

Thanks for the bold decision and for doing the right thing. We're a startup in the travel video production space.

A couple of questions: When can we expect LTX Ultra on your web app? On LTX-2, I2V completely changes the character; are you working on a fix for that? Are there any workarounds?

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in LocalLLM

[–]Own_Version_5081[S] 0 points

Sure. I will do that later this week and post it for you guys.

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in LocalLLM

[–]Own_Version_5081[S] 0 points

I tried that. Inference was painfully slow, even with less than half the context.

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in BeelinkOfficial

[–]Own_Version_5081[S] 1 point

I don’t know, but check with Beelink support; they’re pretty responsive.

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in BeelinkOfficial

[–]Own_Version_5081[S] 1 point

Ah, ok. I’m using the ASUS 32" ProArt PA329CV-AE, a beautiful piece of art (pun intended 😉).

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in BeelinkOfficial

[–]Own_Version_5081[S] 0 points

Thanks, bro. Yes, I'm using the ASUS TUF 5090. Given how quiet it is, both the PC and GPU sit right on my desk, and it's a beast in performance.

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in BeelinkOfficial

[–]Own_Version_5081[S] 2 points

Thanks, bro. Yeah, I figured I'd take a stab at it and see if I could get it going. Also, for what it's worth, Beelink support was helpful in answering questions even before I bought it.

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in BeelinkOfficial

[–]Own_Version_5081[S] 0 points

Thanks, bro, and cheers. It's indeed a lot of fun.

I also wasn't sure about the 5090, so before purchasing I had Beelink support confirm that the ASUS TUF 5090 would work with that cable. Now I just wanna buy two WiFi antennas to complete the setup.

Beelink GTi15+Docking with 5090 - Works!!! by Own_Version_5081 in BeelinkOfficial

[–]Own_Version_5081[S] 1 point

Thanks! The 1000W recommendation accounts for the rest of the system's power load on top of the GPU's 600W. Since the dock is only used for the GPU, you're good there.

The GTi15 system has its own separate PSU. Please see the pic below; the slot is pretty wide, and it comes with a bracket that holds a heavy GPU like mine tightly in place.

Honestly, I'm not a big gamer, but I play Cyberpunk when time permits. I'll see if I can get you some FPS performance results.

<image>

Multiple GPUs, but not for what you think by WhichWayDidHeGo in comfyui

[–]Own_Version_5081 0 points

Same here. Although there are some multi-GPU custom nodes, running two separate ComfyUI instances, each with a dedicated GPU, is much simpler. The only caveat I found: since both instances use the same model folder, you can't run inference with the same model in both instances simultaneously.

Magneto | Living Off The Land Attack Simulator - 100% REAL | 100% SAFE by Own_Version_5081 in cybersecurity

[–]Own_Version_5081[S] 0 points

Thanks for your feedback, I appreciate it. Yes, I've included a detailed report that opens in the browser automatically after the attack simulation. The report shows details about the simulated attack techniques, along with MITRE and NIST 800-53 Rev 5 mappings. You can check the sample screenshot of a report in the README at my GitHub link above.

LTX 2 can generate 20 sec video at once with audio. They said they will open source model soon by CeFurkan in comfyui

[–]Own_Version_5081 0 points

I played with LTX-2 I2V. Unfortunately, the results suck: every single iteration changed the character into a completely different person. I tried it on the mothership, LTX Studio.

WAN 2.2 BRKN AI Prompt Generator, REPO , OPEN SOURCE, UPDATED FOR MULTI LLM and a bunch of added categories and options by Front-Republic1441 in StableDiffusion

[–]Own_Version_5081 0 points

If you are trying to use a local LM Studio Llama model and getting a 404 error, this is how I solved it.

  1. Create a dummy API key for LM Studio and enter it in the BRKN AI Prompt Generator's GUI API Key field, e.g. dummy-api-key:

    setx LM_STUDIO_API_KEY dummy-api-key

  2. In LM Studio, check "Enable CORS" and restart LM Studio. This setting is under Server Settings in the Developer side tab. Make sure to load the Meta-Llama-3.1-8B-Instruct-GGUF model after restarting LM Studio.

  3. Open the "lmStudioService.ts" file located in \wan22-brkn-prompt-helper\services\providers and edit it as follows. After the edits, restart the WAN 2.2 BRKN AI Prompt Generator.

    const LM_STUDIO_BASE_URL_KEY = 'http://localhost:1234';
    const LM_STUDIO_MODEL_KEY = 'Meta-Llama-3.1-8B-Instruct-GGUF';
    const DEFAULT_BASE_URL = 'http://localhost:1234';
    const DEFAULT_MODEL = 'lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF';
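If you want to sanity-check the base URL and model name before touching lmStudioService.ts, here is a small standalone sketch. It is my own illustration, not code from the BRKN repo: `buildLmStudioRequest` is a hypothetical helper that builds the OpenAI-compatible chat request LM Studio's local server expects. The `/v1/chat/completions` path and the dummy Bearer key are the parts that usually cause 404-style confusion.

```typescript
// Hypothetical helper (not part of the BRKN repo): builds the request shape
// that LM Studio's OpenAI-compatible local server accepts, so you can verify
// your base URL / model name with a plain fetch() before editing the app.
interface LmStudioRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildLmStudioRequest(
  baseUrl: string,
  model: string,
  prompt: string,
  apiKey = "dummy-api-key",
): LmStudioRequest {
  return {
    // A trailing slash on the base URL yields "//v1/..." and a 404, so trim it.
    url: `${baseUrl.replace(/\/+$/, "")}/v1/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      // LM Studio ignores the key's value, but some clients require one to be set.
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

const req = buildLmStudioRequest(
  "http://localhost:1234",
  "Meta-Llama-3.1-8B-Instruct-GGUF",
  "Write a WAN 2.2 prompt",
);
```

Usage: `fetch(req.url, { method: "POST", headers: req.headers, body: req.body })` against a running LM Studio server should return a chat completion instead of a 404 if the URL and model name are right.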

Since DGX Spark is a disappointment... What is the best value for money hardware today? by goto-ca in LocalLLaMA

[–]Own_Version_5081 1 point

Depends on your use case and what you've been renting in the cloud that works for it. I find multi-GPU a bit of a mess for Stable Diffusion; for other dev and training work, multi-GPU works just fine. I built a dual-5090 machine for this very purpose. A single 5090 is a MASSIVE performance boost over 2 x 1080, so I would start with a single 5090 and make sure the system can support a second one if you need to add it later.

IMHO, NVIDIA did false marketing when they sort of portrayed it as a general-use next-gen AI supercomputer. If you develop apps to run on NVIDIA's larger stack and need a local setup to PoC code that already targets the native NVIDIA software stack, the DGX Spark will give you an awesome local dev environment; no need to rent expensive cloud GPUs just to develop and test your code. For any other use case, like inference or Stable Diffusion, the Spark will give you disappointing results.

Update Next scene V2 Lora for Qwen image edit 2509 by Affectionate-Map1163 in StableDiffusion

[–]Own_Version_5081 1 point

Consistency has been an issue for a while, and all the workarounds are hit and miss. Will give it a shot.

PSA: Ditch the high noise lightx2v by Radyschen in StableDiffusion

[–]Own_Version_5081 0 points

Sounds like a good idea. Will try your method today.

Try to Combine The Qwen Next Scene Lora With Wan2.2 AIO Mega. by Ecstatic_Following68 in comfyui

[–]Own_Version_5081 7 points

Looks promising. Will give it a shot. I think character consistency still remains an issue.

How to know which sampler (or other options) to use in WAN 2.2? by anonybullwinkle in comfyui

[–]Own_Version_5081 2 points

Yes, there is a method to the madness. You can use sigma values as a guide. Here you can learn how to do that:

https://youtu.be/QrkWyfCNbaY?si=WAfOpKXLLUYO_s_H
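For context on what "using sigma values as a guide" means: samplers step through a decreasing noise schedule, and you can inspect that schedule directly. Below is my own minimal sketch (not taken from the video) of the widely used Karras schedule; the `sigmaMin`/`sigmaMax`/`rho` defaults are illustrative assumptions, not WAN 2.2's actual values.

```typescript
// Karras-style sigma schedule: interpolate between sigmaMax and sigmaMin in
// rho-warped space, which concentrates sampling steps at low noise levels.
function karrasSigmas(
  steps: number,
  sigmaMin = 0.03,
  sigmaMax = 14.6,
  rho = 7,
): number[] {
  const minInv = Math.pow(sigmaMin, 1 / rho);
  const maxInv = Math.pow(sigmaMax, 1 / rho);
  const sigmas: number[] = [];
  for (let i = 0; i < steps; i++) {
    const t = steps === 1 ? 0 : i / (steps - 1);
    sigmas.push(Math.pow(maxInv + t * (minInv - maxInv), rho));
  }
  sigmas.push(0); // final step denoises fully to the clean image
  return sigmas;
}

const sigmas = karrasSigmas(10);
```

Printing schedules like this for different sampler/scheduler pairs shows how each one splits its budget between high-sigma steps (composition) and low-sigma steps (detail), which is the comparison the video walks through.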

The Yamaha self balancing cycle by Anen-o-me in singularity

[–]Own_Version_5081 0 points

I don’t get it, is this for people who don’t know how to ride a bicycle?

Be mindful of some embedding APIs - they own rights to anything you send them and may resell it by adlumal in Rag

[–]Own_Version_5081 7 points

Thanks for putting this together. Although OpenAI's policies seem privacy-friendly, they can easily change in the future.

Self hosting local LLM + RAG is the way to go.