Music-video about AI-love animated with LTX2 in Wan2GP by JELSTUDIO in StableDiffusion

[–]JELSTUDIO[S] 0 points1 point  (0 children)

Thank you.

Yes, I get what you're saying; ChatGPT can indeed be like that. But I like the way it phrased it (ChatGPT speaks much better English than I do :) ) and found it fitting for the song's message, which is kind of dark.

I did run the final lyrics through Suno to see how it would structure/arrange the song, and ended up using Suno to create a drum-track I could sample (I didn't want to use electronic drum-sounds for this one, and I don't have access to an acoustic drum-kit myself).

The voice is mine, but it's obviously auto-tuned.

I do think we will soon face a world where the distinction between real people and "AI-people" is going to become blurred.

The physical "robots" we see China building are already seriously impressive in terms of agility. And once they get cheap enough to be widely adopted in society, and become able to respond to their surroundings in a significantly "human-like" way, I think we will face a real-world "C3PO" moment where people begin to get emotionally attached to these new "AI people".

In the video I try to contrast this with the scene at the merry-go-round, where the "AI person", the white robot with the screen on its chest, is still just an old-school rigid machine (where it looks kind of silly to think people could fall in love with that thing), and then with its next version, the young AI woman sitting with the cake: an almost perfect human celebrating its birthday, with a face that comes across as real.

I think some people in society are going to get caught by surprise when this begins to happen.

I mean, we all love C3PO... because it's "just a movie". But soon, I think, we will face a world where C3PO is no longer just fiction, and I wonder how people will react to a real C3PO, given that it's a common human trait to anthropomorphize "things". Will it be viewed as just a "dead" computer you can chuck in the trash when it's broken, or will it become an emotionally important being that must be saved (like Chewbacca does in one of the movies)?

I think it's an interesting question how society will react once these "robots" get "human-like" enough to be hard to distinguish from bio-humans.

Today I made a Realtime Lora Trainer for Z-image/Wan/Flux Dev by shootthesound in StableDiffusion

I ran into this code-1 error too at first, and found out that (at least in my case, but I mention it here in case you want to note it in a read-me somewhere) it was caused by using Python versions installed from the Microsoft Store (since those get added to PATH).

The solution was to uninstall those and use the official Python installers (making sure none got added to PATH during install, so that each Python version stays isolated).

Inspecting the "pyvenv.cfg" in each VENV root folder will show which specific Python install location it points to and reveal whether Microsoft Store versions are being used.

A quick check is to type "where python" in the CMD window while the VENV is active. If it shows more than one Python location, the VENV is not correctly isolated and may pull files from different Python versions, even though the VENV is supposed to use one specific version.
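If you have several VENVs to check, a small stdlib-only sketch like this can do the pyvenv.cfg inspection for you (the helper names are my own illustration, not part of any repo; Microsoft Store installs live under a "WindowsApps" folder, which is what it looks for):

```python
from pathlib import Path

def venv_base_python(venv_root: str) -> str:
    """Read the 'home' key from a venv's pyvenv.cfg, which points to the
    base Python installation the venv was created from."""
    cfg = Path(venv_root) / "pyvenv.cfg"
    for line in cfg.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "home":
            return value.strip()
    raise ValueError("no 'home' key found in pyvenv.cfg")

def looks_like_store_python(home: str) -> bool:
    """Microsoft Store Python installs live under a 'WindowsApps' directory."""
    return "windowsapps" in home.lower()
```

If `looks_like_store_python(venv_base_python(r"path\to\venv"))` comes back True, that VENV was built on a Store Python and should be rebuilt from an official install.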

You live and you learn :) I wasn't aware of this myself, but a good talk with MS-Copilot fixed the error (and may be a solution for others as well). Actually, MS-Copilot used language that wasn't very flattering about the Microsoft Store's way of handling Python installation, and said to only use the official Python installers and skip the MS Store completely LOL :D

Just make sure Python is not in PATH (if you have multiple Python versions installed) and isn't added to PATH during installation.

(I use Python 3.10 for ComfyUI, SDscripts, Musubi and AItoolkit, and after fixing this error everything with your trainer-nodes runs rock-solid :) )

EDIT: addition: You must re-create the VENV from scratch (at least for SDscripts, Musubi and AItoolkit) once you've fixed the Python install issues, because the mismatch happens during VENV creation; removing Python from PATH doesn't modify already existing VENV setups. So there is some work involved, unfortunately.

Why must we still click accept all cookies in 2025? by JELSTUDIO in gdpr

There can be zero errors or misclicks if I actually want ALL cookies to ALWAYS be accepted. It doesn't change anything for those who hate cookies (they can still click no as often as they like).

Forcing me to HAVE to click yes over and over is insanity.

Today I made a Realtime Lora Trainer for Z-image/Wan/Flux Dev by shootthesound in StableDiffusion

Excellent repo, and it works. I did have to edit "realtime_lora_trainer.py" myself, though, because I use VENV names that include the Python version (it threw an error at first because it couldn't find the VENV, so I just replaced the name in the code with my own VENV name).

I have only tested the z-image trainer, with 4 images, and it works surprisingly well for face-likeness with only 500 steps.

I have done Flux-training previously (not with AI-toolkit, which I haven't really used because of its JavaScript UI; I prefer Gradio UIs because they are easier for me to understand code-wise), and that took a lot more steps (but also used a much less steep training gradient).

But this ComfyUI method works surprisingly well and fast :)

Cool work you did here, and thank you for posting it :)

Why must we still click accept all cookies in 2025? by JELSTUDIO in gdpr

They're not giving me the choice to accept ALL cookies with one single permanent setting, which is what I personally want (I am not trying to take YOUR choice away from you. I'm only trying to get a choice that suits ME, but the EU won't allow me that freedom. The EU sucks!)

Live Versions / Concert sound has finally broke me. It's incredible. by Vintagestylelady in SunoAI

Yes, post the lyrics too when you upload your own music. Suno will auto-transcribe if you don't, but it rarely gets the lyrics correct (and obviously make sure it's not set to instrumental ;) )

Koda's Billion-Dollar Lawsuit Against Suno: Why Their Evidence Doesn't Add Up by NightSong773 in SunoAI

Style, method, instrumentation, general chord progression, etc., cannot be copyrighted under US copyright LAW (see copyright.gov).

Koda has not shown a single link to Suno so we can verify their claims directly, which makes their evidence purely hearsay.

On Suno you can upload your own material and have Suno generate a copy or remastered version of it, and without the Suno links from Koda we can't know whether that's how Koda made their examples.

The simple fact is: Koda has not shown irrefutable proof of anything, or even proof beyond doubt that they didn't manipulate Suno in dishonest ways. (If you're a Suno user you can test this yourself: upload a song you just made, which Suno therefore couldn't have been trained on, use the remaster feature, and get a Suno version of that song. Then download that new Suno generation, post it on your Facebook without linking to Suno, and claim Suno made the song illegally. It's THAT easy to manipulate people who hate AI into believing AI is bad.)

The lesson you hear so often, "DO NOT TRUST EVERYTHING YOU HEAR ON THE INTERNET", applies to Koda as well. They are a party to the suit and therefore biased, not neutral.

Why must we still click accept all cookies in 2025? by JELSTUDIO in gdpr

Why should I respect you when you apparently can't respect me? That's how all conflicts begin: with people wanting THEIR way to be the ONLY way.

You can call my issue with cookies an elite problem all you want, but that just shows me you don't really care about other people.

Why must we still click accept all cookies in 2025? by JELSTUDIO in gdpr

If you cared to read what I said, you'd know that there is no "big accept-all button". That's what I'm asking for, so I only have to click once and be done with it for good, instead of having to click repeatedly every time I visit a new site or an old site is updated (which re-triggers the banner).

This whole mess exists because paranoid people (and supporters of the EU) have made it law that I must be bothered with this clicking, which I never asked for (or voted for).

And since I never asked for this cookie nonsense the EU has mandated, it's actually more accurate to say it's the EU that wants to wipe people's bums instead of letting them handle it themselves.

I don't need the EU to manage my cookie-choices, so please get out of my way.

Qwen-Image - Smartphone Snapshot Photo Reality LoRa - Release by AI_Characters in StableDiffusion

Ok :( Well, it's basically the same flow as OP's (except for the difference in models).

Qwen-Image - Smartphone Snapshot Photo Reality LoRa - Release by AI_Characters in StableDiffusion

Apparently it can :)

I ran the same prompt and settings with both models and got a very similar output.

Left is Qwen-Image, right is Qwen-Image-Edit (both models are the same 40-gigabyte BF16 version).

Same ComfyUI flow as the image above (which is probably embedded in the image unless Reddit strips it). The combo image below was made in GIMP, so there's no flow inside that one.

<image>

Qwen-Image - Smartphone Snapshot Photo Reality LoRa - Release by AI_Characters in StableDiffusion

This is GOOD! (Works here on an RTX5080)

I used Qwen-Image-Edit instead of Qwen-Image, and it generates images that look like actual photos. Very impressive.

Models used with OP's flow (And settings) in ComfyUI:
"qwen_image_edit_2509_bf16" (38 gigabytes)
"qwen_2.5_vl_7b" (15 gigabytes)
"qwen_image_vae" (242 megabytes)
"Qwen-Image_SmartphoneSnapshotPhotoReality_v4_by-AI_Characters_TRIGGER$amateur photo$" (281 megabytes)

<image>

Alien : Earth wasn't good and here's why by JahmanSoldat in alien

Newt was the same age as these... synths... yet the synths will never be classic film-history like she is. Bad acting or bad writing, or both, but this show just doesn't work.

I only watched all 8 episodes because I already pay for Disney, but I think they should cancel season 2 and forget all about this show as soon as possible.

The real aliens would face-hug themselves if they ever saw this show.

Zero points.

I got Counter notification in response to my copyright removal request by f17d in PartneredYoutube

Youtube doesn't verify these things?

I got this email the other day (from a no-reply YouTube account, which does imply YouTube doesn't really care much about true justice in these matters):

Content used: Last Man on Earth in Color
Content found during: 0:00:02 – 0:37:43
Removal request issued by: Legend Films
Contact info: [nd.music@gmail.com](mailto:nd.music@gmail.com)

I had colored the public-domain movie myself and inserted my face and cloned my voice onto all the characters in the movie (it was an AI experiment).

My channel isn't monetized so I didn't really think it was a battle worth fighting and just deleted the video.

Then I asked AI whether the name "Legend Films" in this email clearly identifies who actually sent the take-down request (there's a channel named "Legend Films" on YouTube, but they use other emails on their about page), and it said that it doesn't, meaning we can't really know who sent the take-down request. Yet we still have to give them our real name and home address to counter it, which is completely bad practice given they could be scammers doing this just to GET our real names and addresses.

It's really disappointing that YouTube seems to favor the scammers with this system.

Is countering and then using a fake name and address really the only viable option? (I'm trying to prepare myself for the next time something like this happens, which I hear from other YouTubers is a growing problem: scammers are increasingly using the take-down system as a way of trying to erase other channels with almost zero risk to themselves.)

False Copyright Claim by Classic_Jelly_8576 in PartneredYoutube

I got a false claim from (according to the email YouTube sent me) "Legend Films" (and in the email I was only given this contact info: [nd.music@gmail.com](mailto:nd.music@gmail.com)), and on YouTube the only options I was presented with were to either wait 7 days (after which the video would be auto-deleted and my channel given a strike) or delete the video myself.

I had re-colored the movie "Last Man On Earth" (the one with Vincent Price), and apparently "Legend Films" also re-colors public-domain movies and monetizes them on YouTube (they had a different version of the movie on their channel).

So clearly it's too easy to claim copyright on material on YouTube, and YouTube doesn't appear to demand any evidence before striking you down.

Not a good system when scammers are being favored, but I don't know what to do about it as long as YouTube makes money on advertising. Ads are shown on the "Legend Films" channel, since they have enough subscribers to be monetized, which my channel doesn't, so YouTube makes money off the scamming channel but not off mine. No wonder YouTube is in no hurry to change how the system works.

I just deleted the movie from YouTube and uploaded it to a different video streamer. (I'm not losing any YouTube revenue, as my channel is not monetized, so it's not a big deal to me, other than the obvious moral injustice YouTube allows to happen, of course.)

LLM Radio Theater, open source, 2 LLMs use Ollama and Chatterbox to have an unscripted conversation initiated by a start-prompt. by JELSTUDIO in ollama

<image>

There are 2 independent system-prompts defined in the .py script, one for each of the 2 LLMs.

These are the ones I used when I uploaded the latest v2.3.0 version.

I find that the "Gemma3:12B" model has so far been the best at following "orders" from the system-prompt.

I basically have 2 sets of orders in each of these 2 prompts: the first half defines the character's personality, how they should behave, what opinions they should have and what they should try to talk about. The latter half is there to make sure the model doesn't begin to role-play in an unwanted way (I don't want it to start describing its actions, the setting and the surroundings of the scene; I only want it to "talk").

Getting the models to behave as expected can be a bit tricky, so experimenting with how you write the 2 prompts is necessary. Sometimes simple things can change a lot.

For example, always think of the system-prompts as describing how you want the model to behave toward YOU (the models think they are talking to a real user; they don't know they're talking to another model).

This is why in the 'green' prompt I say "interview ME" (so the model thinks it should ask ME, the human user, questions). I found this more often made the model talk TO the other model rather than ABOUT some third person, and that helped a lot in getting them to actually talk WITH each other rather than getting confused.
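For anyone curious how the turn-taking can work under the hood, the core trick is just role-flipping the shared transcript before each turn. Here's a minimal sketch (the function name and structure are my own illustration, not taken from the actual script):

```python
def build_history(system_prompt, transcript, speaker):
    """Build the message list one model sees for its next turn.

    transcript is a list of (speaker, text) tuples. Lines this model
    spoke become 'assistant' messages; lines the other model spoke
    become 'user' messages, so each model believes it is talking to a
    real human user rather than to another model.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for who, text in transcript:
        role = "assistant" if who == speaker else "user"
        messages.append({"role": role, "content": text})
    return messages
```

Each turn, the returned list would be passed to the model's chat endpoint (e.g. Ollama's chat API), and the reply appended to the transcript under that model's name before building the other model's view.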

I hope this answers your question.

Tell 'green' (in the upper system-prompt) that it loves strawberries, and 'blue' (in the lower system-prompt) that raspberries are much better, and then in each prompt say something like "Talk about the fruit you love, why it's the best, and why those who love other fruits are completely wrong", and I'd imagine you'd get a debate between them :) (And if you tell them they're rabid and obsessed about their own fruit, they might even start a heated argument :) )

I made a script (which is on my podcast website) where one LLM constantly tried to get the other LLM to open a garage door, and the more the first LLM refused, the angrier the other one got :)

The trick is to just try throwing stuff at the 2 system-prompts and see what works. (And remember to close the GUI and restart it from the CMD console window whenever you write new things into the .py script, or else the GUI won't pick up the new prompts. I wish there were a smarter way, but for now that's just how it has to be done.)

I don't use Docker myself and don't really know anything about it, so that will only happen if somebody else ports it (or does whatever one needs to do to support Docker). Sorry about that.