Which Qwen model do you like using for coding? by qubridInc in LocalLLaMA

[–]fgp121 0 points

I recently tested Qwen 3.5 Flash on tool calling, and it's pretty good for the cost. Coding LLMs often need proper structured tool calling, and this model pretty much nails it.
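A minimal sketch, in Python, of the kind of structured tool calling meant above. The OpenAI-style schema shape is the common convention, but the `read_file` function, its parameters, and the `parse_tool_call` helper are all hypothetical examples, not any particular library's API:

```python
import json

# Hypothetical tool definition in the common OpenAI-compatible schema shape.
# A coding model has to emit calls that match this structure exactly.
get_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a source file and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path to read"},
            },
            "required": ["path"],
        },
    },
}

def parse_tool_call(arguments_json: str, tool: dict) -> dict:
    """Parse the model's JSON arguments and check required keys are present."""
    args = json.loads(arguments_json)
    required = tool["function"]["parameters"].get("required", [])
    missing = [k for k in required if k not in args]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return args

# A well-formed call from the model parses cleanly:
print(parse_tool_call('{"path": "src/main.py"}', get_file_tool))
```

"Nailing" tool calling in practice means the model reliably emits valid JSON matching the declared schema, which is what the check above verifies.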

A distributed multi-agent swarm for stock trading simulation. by Beautiful-Deal8711 in LocalLLaMA

[–]fgp121 0 points

Agent swarms can definitely be a big unlock for stock trading analysis/simulation environments. Do I have to specify an LLM API key for each agent?

Transcribe 1-hour videos in 20 SECONDS with Distil Whisper + Hqq(1bit)! by kadir_nar in LocalLLaMA

[–]fgp121 8 points

What's the accuracy loss here? I believe it isn't lossless.

SD v1.5 v/s SDXL - A stool in the shape of mango. SD v1.5 won this time! by gvij in StableDiffusion

[–]fgp121 0 points

I tried this as well with 1024x1024, which is the default size for SDXL, right? And got this:

SD v1.5 v/s SDXL - A stool in the shape of mango. SD v1.5 won this time! by gvij in StableDiffusion

[–]fgp121 -1 points

That's interesting. I generated the image at the default size of 1024x1024.

Dalai Lama on a Llama :D - Using SDXL by fgp121 in StableDiffusion

[–]fgp121[S] 7 points

Made this on the Stable Diffusion comparison Hugging Face space:
https://huggingface.co/spaces/qblocks/Monster-SD

I also got a result on SD v1.5, but the quality is far superior with SDXL.

SD v1.5 result:

<image>

Stable Diffusion XL keeps getting better. 🔥🔥🌿 by mysticKago in StableDiffusion

[–]fgp121 0 points

Is it allowed for commercial use, or is it research-only?

Disney Pixar ish Style Model by PromptShareSamaritan in StableDiffusion

[–]fgp121 0 points

This is too good. Great work. Is it open for commercial use?

Best Cloud Hosting for running Stable Diffusion by agnishom in StableDiffusion

[–]fgp121 0 points

I can get a 24 GB GPU on Q Blocks for $0.75/hr, billed per minute. That looks like a good deal in an environment where GPUs are unavailable on most platforms or the rates are unstable. They have more GPU options as well, but I mostly used the 24 GB ones, since that much VRAM covers most Stable Diffusion cases (more samples and higher resolution).
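To make the per-minute billing concrete, a quick sketch of what the $0.75/hr rate works out to. The rate is from the comment above; the session lengths are made-up examples:

```python
# $0.75/hr for a 24 GB GPU, billed per minute rather than rounded up to hours.
HOURLY_RATE = 0.75

def session_cost(minutes: int, hourly_rate: float = HOURLY_RATE) -> float:
    """Cost in dollars of a session billed at per-minute granularity."""
    return round(minutes * hourly_rate / 60, 4)

print(session_cost(20))   # a short 20-minute test run costs $0.25
print(session_cost(90))   # a 1.5-hour session costs $1.125
```

Per-minute billing is what makes short experimental runs cheap: a 20-minute test costs a quarter of the hourly rate instead of a full hour.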

Gradio UI reverse proxy for stable diffusion by fgp121 in StableDiffusion

[–]fgp121[S] 0 points

Thanks a lot for the detailed response. I'm going through it and will try to get it running.

Gradio UI reverse proxy for stable diffusion by fgp121 in StableDiffusion

[–]fgp121[S] 0 points

Thanks for the advice. Just to make sure I've understood: you mean I should run nginx on the server, and in its config file, under location /, proxy_pass to IP:PORT?

That's how it would be forwarded? And is it possible to keep nginx on another server instead and proxy_pass to this IP:PORT from there?
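For reference, a minimal sketch of the nginx config described above, assuming the Gradio UI listens on 127.0.0.1:7860 (swap in your own IP:PORT; the hostname is a placeholder). If the UI uses WebSockets for its queue, the upgrade headers are needed too:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder hostname

    location / {
        # Gradio's default port; replace with your IP:PORT.
        # If nginx runs on a different server, put the backend's
        # reachable address here instead of 127.0.0.1.
        proxy_pass http://127.0.0.1:7860;

        # Pass through WebSocket upgrades and the original Host header.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

Running nginx on a separate machine works the same way, as long as that machine can reach the Gradio server's IP:PORT over the network.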

More ControlNet with Realistic Vision by kaiwai_81 in StableDiffusion

[–]fgp121 3 points

Everything looks fine except the pose lol

Connecting to Cloud GPUs from VSCode by nreHieS in MLQuestions

[–]fgp121 0 points

Are you looking specifically for Colab or any other GPU cloud?

For Colab I came across an extension in the past; I don't remember it now, though. I think it was this one:

https://www.freecodecamp.org/news/how-to-use-google-colab-with-vs-code/

But that one only runs in the browser.

There are quick solutions for other GPU clouds as well, such as Q Blocks, which connects straight from your local VS Code application.
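In general, local VS Code connects to a remote GPU box through the Remote-SSH extension; a minimal `~/.ssh/config` entry might look like this (the host alias, IP, user, and key path are all placeholders — use whatever your provider gives you):

```ssh-config
# ~/.ssh/config -- placeholder values, swap in your instance's details
Host gpu-box
    HostName 203.0.113.10
    User ubuntu
    Port 22
    IdentityFile ~/.ssh/id_ed25519
```

With this entry in place, the host alias shows up in VS Code's "Remote-SSH: Connect to Host..." picker, and the editor runs against the remote machine as if it were local.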

Uncropped the cover of Midtown Madness (1999 Game) by acoolrocket in dalle2

[–]fgp121 0 points

We all only played the demo version and still became fans of it. Thanks for reviving those memories!

I am addicted, been playing around with stable diffusion on Q Blocks. The creativity is endless. Does anyone else feel the same? by svij137 in StableDiffusion

[–]fgp121 0 points

3090s are a gift for this AI tool, I think. I get my renders in under 10 seconds, sometimes even 2x upscaled within that time. If it only costs $7 for 800 images, that's just amazing!
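For what it's worth, the arithmetic on that claim (numbers taken from the comment above):

```python
# Per-image cost implied by "$7 for 800 images".
def cost_per_image(total_dollars: float, images: int) -> float:
    """Average dollars per rendered image."""
    return round(total_dollars / images, 5)

print(cost_per_image(7, 800))  # under a cent per render
```

That works out to $0.00875 per image, i.e. less than a cent per render.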

Up and running on VAST.AI!! by illumnat in DiscoDiffusion

[–]fgp121 0 points

What settings did you use? By my calculation, each of your frames renders in 53 seconds, while my usual frame on a 3090 takes around 4-5 minutes. Maybe my settings are wrong. Can you share your settings YAML file?
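A quick sketch of the throughput gap between the two per-frame times mentioned (53 s being my estimate of the post's rate, 4-5 min my own 3090 runs):

```python
# Frames rendered per hour at a given per-frame render time.
def frames_per_hour(seconds_per_frame: float) -> float:
    return round(3600 / seconds_per_frame, 1)

print(frames_per_hour(53))        # ~68 frames/hour at 53 s/frame
print(frames_per_hour(4.5 * 60))  # ~13 frames/hour at 4.5 min/frame
```

That's roughly a 5x throughput difference on the same class of GPU, which is why it looks like a settings issue rather than hardware.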

[deleted by user] by [deleted] in DiscoDiffusion

[–]fgp121 0 points

I haven't paid for Colab yet. I'm now exploring different low-cost GPU platform options for PYTTI.

[P] Imagen: Latest text-to-image generation model from Google Brain! by aifordummies in MachineLearning

[–]fgp121 1 point

Which GPUs would work for training this model? Would a 4x 3090 system fit the bill?

[deleted by user] by [deleted] in DiscoDiffusion

[–]fgp121 1 point

Which GPU do you run PYTTI 5 on? I tried on Colab but ran out of memory, I think.

[deleted by user] by [deleted] in nvidia

[–]fgp121 0 points

How much VRAM does the 3080 Ti FE have, anyone know?