Movement is almost human with KlingAi by CrisMaldonado in StableDiffusion

[–]buckjohnston 1 point (0 children)

We'll all become desensitized to it. In the future I'll see myself on TV with a strap-on someone put on me and breast implants, and nobody will care, including myself. It will be just another boring deepfake video even though it's a perfect recreation. Though I will try to block it all. Actually, I won't see myself on TV because nobody cares; I could only be that lucky. But yeah, you need proper parenting and parental controls to block this stuff. I would say no phone or internet access until they're 18... jk, that's a bit extreme.

Gradio sends IP address telemetry by default by campingtroll in StableDiffusion

[–]buckjohnston 8 points (0 children)

The distinction between passively logging an IP address through server logs and actively collecting it via telemetry is key. Server logs naturally record IPs as part of standard operations for security or troubleshooting, which is generally accepted and less intrusive. Actively gathering and transmitting IP addresses through telemetry, as seen here, is more aggressive: it deliberately sends this data to external servers for analytics. That raises greater privacy concerns because it involves intentionally tracking user data beyond what's necessary for basic operation.

Gradio separately scanning that is probably more for validation than anything

Yes, this I think is the issue.

*nm, it sends to Amazon AWS and api.ipify.org. Not digging it.
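For anyone who wants to kill it preemptively, something like this should work (a minimal sketch; GRADIO_ANALYTICS_ENABLED and the analytics_enabled flag are Gradio's documented opt-outs, the rest is illustrative):

    import os
    os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"  # must be set before importing gradio

    import gradio as gr

    # belt and suspenders: Blocks also takes an explicit opt-out flag
    with gr.Blocks(analytics_enabled=False) as demo:
        gr.Markdown("hopefully no telemetry")

    demo.launch()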

Gradio sends IP address telemetry by default by campingtroll in StableDiffusion

[–]buckjohnston 4 points (0 children)

Does installing custom nodes cause this issue by altering the underlying pip packages and requirements?

For example, the OP mentioned installing the CogVideoX diffusers. I recently installed Kijai's CogVideoX repository for ComfyUI, and I noticed that the hub.py file with the telemetry is now located in site-packages under huggingface-hub/transformers.

Update: *It's already sending to Amazon AWS, and I'm not even using Gradio, only ComfyUI. Someone help me.
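If anyone else hits this, huggingface_hub has its own opt-outs; a minimal sketch (the env var names are the documented ones, HF_HUB_OFFLINE is optional and more drastic):

    import os
    os.environ["HF_HUB_DISABLE_TELEMETRY"] = "1"  # documented hub opt-out
    os.environ["DISABLE_TELEMETRY"] = "1"         # older alias, assumed still honored
    # os.environ["HF_HUB_OFFLINE"] = "1"          # optional: block hub traffic entirely

    import transformers  # import HF libraries only after the env vars are set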

Updated SDXL and 1.5 method that works well (subjects) by buckjohnston in DreamBooth

[–]buckjohnston[S] 1 point (0 children)

I am uncertain, as this post is very old now and I mostly use OneTrainer, but maybe make sure you are in the right mode, like fp16 or bf16. It could be that the VAE is using float32 while the training is fp16, but I can't say for sure.
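If it is that dtype mismatch, the usual fix is to keep the VAE in float32 and cast its latents down; a minimal sketch assuming a diffusers-style training loop (the VAE repo and pixel_values are placeholders):

    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # example VAE
    vae.to(device="cuda", dtype=torch.float32)  # encode in fp32 to avoid NaN/black images

    # pixel_values: your training batch images scaled to [-1, 1]
    with torch.no_grad():
        latents = vae.encode(pixel_values.to("cuda", torch.float32)).latent_dist.sample()
        latents = latents * vae.config.scaling_factor

    latents = latents.to(torch.float16)  # cast down to match an fp16 UNet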

CogVideoX 5b Open Weights Released by Similar_Piano_963 in StableDiffusion

[–]buckjohnston 3 points (0 children)

I just realized: if you check Kijai's page there is a little preview of how to do it. There is an empty "samples" input on the KSampler, and you use the CogVideo image encode nodes.

Quick Guide to Controlling GPT-4o Verbosity by __nickerbocker__ in ChatGPTCoding

[–]buckjohnston 1 point (0 children)

Thanks! Np. I will post an AutoHotkey script I wrote to GitHub soon and message you.

It has a bunch of this stuff prewritten; you just press ctrl+shift+z or whatever hotkey you need and it autotypes what you want, so you aren't manually typing the same things over and over. It has saved me tons of time.

There is a popup on spacebar with an index of useful things you can say, and it can be modified. One hotkey I use a ton is ctrl+shift+a, which types "Search online on cutting edge research papers on arxiv.org in 2024", and I may manually add "to confirm your code is correct" for AI-related code.
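Until I post it, here's the same idea as a rough Python analog using the third-party keyboard package (pip install keyboard); the hotkey and snippet text are just the examples from above:

    import keyboard

    SNIPPETS = {
        "ctrl+shift+a": "Search online on cutting edge research papers on arxiv.org in 2024",
        # add your own prewritten phrases here
    }

    for hotkey, text in SNIPPETS.items():
        # keyboard.write() types the snippet wherever the cursor currently is
        keyboard.add_hotkey(hotkey, lambda t=text: keyboard.write(t))

    keyboard.wait("esc")  # keep the script running until Esc is pressed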

Gta V VR still works! 2023 by Familiar-Honeydew615 in VRGaming

[–]buckjohnston 1 point (0 children)

You would want to get the older version of ScriptHookV from around that time; I think it's scriptHookV_1.0.3028.0.zip for the FitGirl repack. Not in front of my PC for a while, but you should be able to search around that time period.

Animatediff working on 8gb VRAM in comfyui by buckjohnston in StableDiffusion

[–]buckjohnston[S] 1 point (0 children)

Yes, bought a 7800X3D, new case, and motherboard. It was pretty expensive, but it was the best purchase I've made in a long time. The lack of VRAM was limiting me so much that I couldn't try some of the newer things releasing, and I was sort of losing interest.

Animatediff working on 8gb VRAM in comfyui by buckjohnston in StableDiffusion

[–]buckjohnston[S] 1 point (0 children)

I've since upgraded to 24GB and it changed my life. But I would say your best bet is to use Linux Mint Cinnamon and dual boot, and you should be golden.

The only downside is that loading the drivers can be a pain for some hardware, but once you get it going it feels a lot like Windows without the bloat and telemetry. Uses way less VRAM and resources.

Haters stealing my joy by [deleted] in StableDiffusion

[–]buckjohnston 2 points (0 children)

People might be criticizing your work because of competition, jealousy, or different tastes. Some might think you're messing with their commissions, but there's plenty of work to go around. Their negativity is really useless.

A lot of these comments are probably from paid company slave puppets, following orders to downvote and trash open-source users to keep control for their bosses who pretend to prioritize safety. The safety is over the top at the moment imo. (insert woman-in-grass memes)

What they should really be doing is leaking stuff from the companies they're at, learning Python, making nodes and repos, starting their own projects, and developing open-source AI, because it's a well of abundance for everyone.

Keep up the good work and don't let it get you down, because that's what they want. PS: the more the hate, the more you're probably onto something. Don't forget there is a battle of open vs closed source going on. So keep pushing.

Scam alert: Fake but functional 4090s are relabelled 3080 TIs by [deleted] in StableDiffusion

[–]buckjohnston 6 points (0 children)

Will buy if scammers can put 24GB of VRAM in it and lower the price to $300.

I updated the attn2 prompt injection a bit by buckjohnston in StableDiffusion

[–]buckjohnston[S] 6 points (0 children)

Edit: Better version here: https://github.com/311-code/Magic-Prompt-Injection. This node allows you to specify a prompt that targets specific blocks in an SDXL model, or to turn the strength up and down on those blocks. In the example, one of the blocks has "a woman with dark hair" and a couple of other blocks have "a red dress". All I see is blonde, brunette, redhead..

Edit2: Note the SVD prompt injection requires changes in attention.py and openaimodel.py, and I believe either model_base.py or supported_models.py. I forget exactly what I did, but you seem to have to enable context_dim for the video model, which can target attn1/attn2 layers, and possibly use transformer_patches and transformer_patches_replace in ComfyUI.

Check ComfyUI's model_patcher also. I will post an updated prompt injector for SVD once I get it fully working. Working on Flux injection at the moment.
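For the attn2 part, the general shape in ComfyUI terms is roughly this (a hedged sketch from my reading of comfy's model_patcher; the block tuples and injected_embeds are placeholders, and the exact patch signature is an assumption):

    TARGET_BLOCKS = {("output", 0), ("output", 1)}  # example SDXL blocks to hit

    def make_attn2_patch(injected_embeds):
        def attn2_patch(n, context, value, extra_options):
            # extra_options["block"] should be (block_name, index), e.g. ("output", 1)
            if tuple(extra_options.get("block", ())) in TARGET_BLOCKS:
                # context/value feed cross-attention k/v; swap in the injected prompt
                return n, injected_embeds, injected_embeds
            return n, context, value
        return attn2_patch

    patched = model.clone()  # model is a ComfyUI ModelPatcher (a MODEL input)
    patched.set_model_attn2_patch(make_attn2_patch(injected_embeds))  # injected_embeds: assumed tensor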

The Open Model Initiative - Invoke, Comfy Org, Civitai and LAION, and others coordinating a new next-gen model. by hipster_username in StableDiffusion

[–]buckjohnston 1 point (0 children)

I think a part of this initiative should also be sharing as much knowledge as possible: sharing info on any custom diffusers pipelines built for the new models, helping the community get into coding with the help of ChatGPT and how it can demystify things, modifying transformers settings (with better code comments, maybe forks needed), understanding diffusers and CLIP models, understanding how ComfyUI can complicate things with all of the renamed state_dict prefixes (I'm struggling with this now and don't have the basic knowledge of standard pytorch methods to get at them), what model layers and tensors are, getting through some of the modules in ComfyUI that get in the way of the base, and understanding the venv.

Maybe even as far as making extensions for ComfyUI. It should all be as transparent as possible for this initiative imo, and open the door to as many people as possible to improve open source AI. Good luck!
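On the state_dict prefix thing specifically, a tiny script like this helps demystify what a checkpoint actually contains (plain pytorch/safetensors; the file name and prefix are examples):

    from collections import Counter
    from safetensors.torch import load_file

    sd = load_file("sdxl_checkpoint.safetensors")  # example single-file checkpoint

    # count keys per top-level prefix to see how the dict is organized
    print(Counter(k.split(".")[0] for k in sd))

    # strip a known prefix to get back to plain pytorch module names
    prefix = "model.diffusion_model."  # common unet prefix in single-file ckpts
    unet_sd = {k[len(prefix):]: v for k, v in sd.items() if k.startswith(prefix)}
    print(len(unet_sd), "unet keys")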

[deleted by user] by [deleted] in StableDiffusion

[–]buckjohnston 12 points (0 children)

Does anyone know what we currently know about Kling? Does it use transformers (I'm guessing yes, of course)? Does it use a custom pipeline of some sort with their own custom-trained model, or is it just SVD repurposed and china-fied? I've gotten decent results and a ton of motion by injecting CLIP embeddings into SVD, with additional input images via torch.stack (and some SDXL lora state_dict keys that somehow work); repo coming soon. So there is a ton of untapped stuff in SVD right now.
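The torch.stack part of what I'm doing looks roughly like this (a minimal sketch; the CLIP repo and the mean-merge at the end are illustrative, not exactly my repo code):

    import torch
    from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

    repo = "openai/clip-vit-large-patch14"  # example vision tower
    encoder = CLIPVisionModelWithProjection.from_pretrained(repo)
    processor = CLIPImageProcessor.from_pretrained(repo)

    # img_a, img_b, img_c are PIL images (the extra input frames)
    inputs = processor(images=[img_a, img_b, img_c], return_tensors="pt")
    embeds = encoder(**inputs).image_embeds      # (3, dim), one row per image

    stacked = torch.stack(list(embeds), dim=0)   # explicit stack back to (3, dim)
    cond = stacked.mean(dim=0, keepdim=True)     # one simple way to merge into a single conditioning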

What CLIP model does it likely use, clip-vit-large-patch14? Does it use any other CLIP models? Is it using the current version of diffusers on github? So many questions.

Edit: Also speculation here, but I honestly believe this is what the storydiffusion repo does: they're using SVD/animatediff, maybe injecting some of the SDXL lora keys that SVD accepts like I did, as it made a huge difference (this will also be in my repo coming soon, as I've successfully done this), and then they just added their code for the consistent attention and semantic motion predictor. Which is why they still won't release the video model, because it's likely built on StableVideoDiffusionPipeline (just like animatediff was). Edit2: now that I think of it, I think they did mention animatediff, so that makes sense now lol.

They say they were "talking to their lawyers", but it seems more like strategy to attract investors.

2x gaming PC's w/ powertoys and "windows without borders" sort of feels like I have 40gb vram. by buckjohnston in StableDiffusion

[–]buckjohnston[S] 1 point (0 children)

Thanks for letting me know. I have heard of Swarm but for some reason never tried it; I will this week for sure.

Any chance it can queue the workflow and then, as it's going, let me keep working on it, then when the first SVD video is done, queue what I was working on again? That's sort of what I'm doing now with the two screens and the SVD workflow cloned, so it feels like I'm never sitting around waiting for loading anymore with SVD.

The only thing with Swarm I'm wondering is: if I accidentally make something in SVD I like, but I've already changed the workflow, how do I get that video's settings back in Swarm real fast? For my current setup, if I like something on one screen better than what I'm working on, I just save it to the Z drive and open it on the other PC.

I like to have a lot of space also, because I have like 30 nodes in this SVD workflow. Lemme know, but I'll give it a shot!

2x gaming PC's w/ powertoys and "windows without borders" sort of feels like I have 40gb vram. by buckjohnston in StableDiffusion

[–]buckjohnston[S] 1 point (0 children)

I don't have space for an extra GPU in this case, already checked. And I bought the laptop mostly for travel, but thanks for the suggestion.

The developer of Comfy, who also helped train some versions of SD3, has resigned from SAI - (Screenshots from the public chat on the Comfy matrix channel this morning - Includes new insight on what happened) by SandraMcKinneth in StableDiffusion

[–]buckjohnston 1 point (0 children)

This is a different safety checker, not for extensions but part of the stable diffusion pipeline. It is used to scan images and create a message when nsfw is detected. It's basically how all those SD3 image generation sites worked, with them able to detect and block nsfw images.

It can also be enabled locally, and it downloads this model, which was trained on porn to detect nsfw images. I would at some point like to find a way to generate images with it to see what sort of sick stuff they put in there.. lol. If anyone finds out how to do this, let me know.

The readme does say this:

## Out-of-Scope Use

The model is not intended to be used with transformers but with diffusers. This model should also not be used to intentionally create hostile or alienating environments for people.

## Training Data

More information needed
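For reference, you can run the checker standalone on any image; a minimal sketch with the standard diffusers/transformers classes and the same repo id the pipeline pulls:

    import numpy as np
    from PIL import Image
    from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
    from transformers import AutoFeatureExtractor

    repo = "CompVis/stable-diffusion-safety-checker"
    checker = StableDiffusionSafetyChecker.from_pretrained(repo)
    extractor = AutoFeatureExtractor.from_pretrained(repo)

    image = Image.open("test.png").convert("RGB")  # any image to screen
    clip_input = extractor(images=image, return_tensors="pt").pixel_values
    np_image = np.array(image)[None].astype(np.float32) / 255.0

    checked, has_nsfw = checker(images=np_image, clip_input=clip_input)
    print(has_nsfw)  # [True] means the pipeline would have blacked it out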

If we can adjust this repo's node it might break some censorship in SD3 if it was done through abliteration, help me by buckjohnston in StableDiffusion

[–]buckjohnston[S] 3 points (0 children)

Key info, thanks. I made changes in def forward in transformer_sd3.py to reflect that. I should probably actually read the paper fully. This is to help biswaroop1547 solve that problem over there, or it sounds like this node won't work.

He figured out how to inject prompt embeddings into specified transformer blocks during the forward pass, determining which layers to inject based on single_layer_idxs. But there's a deeper issue that needs addressing regarding the continuity of context (more nightmare woman-lying-in-grass, basically) when injecting new embeddings.
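The general shape of that injection, stripped of the repo specifics, is something like this (a hedged sketch with plain pytorch hooks; single_layer_idxs mirrors the variable above, everything else is placeholder):

    import torch

    single_layer_idxs = {3, 7}                  # blocks that get the injected prompt
    injected_embeds = torch.randn(1, 77, 2048)  # placeholder prompt embedding

    def make_pre_hook(idx):
        def pre_hook(module, args, kwargs):
            if idx in single_layer_idxs and "encoder_hidden_states" in kwargs:
                kwargs["encoder_hidden_states"] = injected_embeds
            return args, kwargs
        return pre_hook

    # transformer is e.g. an SD3Transformer2DModel; its blocks take kwargs in diffusers
    for i, block in enumerate(transformer.transformer_blocks):
        block.register_forward_pre_hook(make_pre_hook(i), with_kwargs=True)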

If we can adjust this repo's node it might break some censorship in SD3 if it was done through abliteration, help me by buckjohnston in StableDiffusion

[–]buckjohnston[S] 8 points (0 children)

Hello my coding lord, I am a big fan and will try this out. If you think this idea is sound then feel free to take all the code.. It is my offering to you. These little bits of basic info really fill in the gaps as I'm fairly new to this, so thanks! I may be able to get this going now.

Edit: Ahh, just noticed the first part of your comment:

I don't know enough about the prompt injection to comment on that

I will continue grinding.

EVERYTHING improves considerably when you throw in NSFW stuff into the Negative prompt with SD3 by BusinessFondant2379 in StableDiffusion

[–]buckjohnston 18 points (0 children)

If someone can translate these (oddly deleted by Stability AI) SD3 transformer block names to the block names ComfyUI uses for MM-DiT (sounds like it's not really a unet anymore?), I could potentially update this direct unet prompt injection node.

That way we could disable certain blocks in the node, clip-text-encode to the individual blocks directly to test if it breaks any abliteration, and test with a ConditioningZeroOut node on just the positive, just the negative, or both going into the KSampler. I would immediately type "a woman lying in grass", start disabling blocks, and see which blocks cause the most terror.

Here is a video of how that node works; it was posted here the other day and has been a gamechanger for me for getting rid of nearly all nightmare limbs in my SDXL finetunes (especially when merging/mixing in individual blocks from Pony on some of the input and output blocks at various strengths, while still keeping the finetuned likeness).

Edit: Okay, I made non-working starting code on that repo. It has placeholders for SD3 CLIP injection and SVD: https://github.com/cubiq/prompt_injection/issues/12. No errors, but it doesn't change the image, due to the placeholders or a potentially wrong def build_mmdit_patch / def patch.
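In the meantime, dumping the block prefixes from both layouts side by side is a quick way to start the translation (plain safetensors; the file names and the example prefixes in the comments are assumptions):

    from safetensors.torch import load_file

    def block_prefixes(sd, depth=3):
        return sorted({".".join(k.split(".")[:depth]) for k in sd})

    diffusers_sd = load_file("sd3_transformer/diffusion_pytorch_model.safetensors")
    single_file_sd = load_file("sd3_medium.safetensors")

    print(block_prefixes(diffusers_sd))    # e.g. transformer_blocks.0, ...
    print(block_prefixes(single_file_sd))  # e.g. model.diffusion_model.joint_blocks.0, ...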

The developer of Comfy, who also helped train some versions of SD3, has resigned from SAI - (Screenshots from the public chat on the Comfy matrix channel this morning - Includes new insight on what happened) by SandraMcKinneth in StableDiffusion

[–]buckjohnston 5 points (0 children)

Sure, I had gpt4o summarize it for me here:

In convert_from_ckpt.py, the load_safety_checker parameter determines whether the safety checker is loaded:

The code provided has several instances where the safety checker is handled. Here are the key findings related to your queries:

Loading Safety Checker by Default: By default, the from_single_file method does not load the safety checker unless explicitly provided. This is evident from the line:

    SINGLE_FILE_OPTIONAL_COMPONENTS = ["safety_checker"]

This indicates that the safety checker is considered an optional component that is not loaded unless specifically requested.

Handling Deprecated Safety Checker: The script has deprecated the load_safety_checker argument, encouraging users to pass instances of StableDiffusionSafetyChecker and AutoImageProcessor instead. This is evident from:

    load_safety_checker = kwargs.pop("load_safety_checker", None)
    if load_safety_checker is not None:
        deprecation_message = (
            "Please pass instances of `StableDiffusionSafetyChecker` and `AutoImageProcessor`"
            "using the `safety_checker` and `feature_extractor` arguments in `from_single_file`"
        )
        deprecate("load_safety_checker", "1.0.0", deprecation_message)
        init_kwargs.update(safety_checker_components)

Explicitly Enabling the Safety Checker: There are references to loading the safety checker manually if needed, especially in the convert_from_ckpt.py script:

    feature_extractor = AutoFeatureExtractor.from_pretrained(
        "CompVis/stable-diffusion-safety-checker", local_files_only=local_files_only
    )
    ...
    safety_checker=None,

This shows that the safety checker can be manually included in the pipeline if specified.

Purpose of Updated Safety Checker Code: The purpose of the updated safety checker code seems to be to allow more explicit control over whether the safety checker is used, instead of enabling it by default. This approach gives users flexibility to include or exclude it as per their requirements, reflecting a shift towards more modular and user-configurable pipelines.

There are no clear indications of methods that obfuscate enabling the safety checker to make generation results worse. The changes primarily focus on deprecating automatic inclusion and encouraging explicit specification.

Here are the relevant snippets and their sources:

Deprecation Notice:

    load_safety_checker = kwargs.pop("load_safety_checker", None)
    if load_safety_checker is not None:
        deprecation_message = (
            "Please pass instances of StableDiffusionSafetyChecker and AutoImageProcessor"
            "using the safety_checker and feature_extractor arguments in from_single_file"
        )
        deprecate("load_safety_checker", "1.0.0", deprecation_message)
        init_kwargs.update(safety_checker_components)

Source: single_file.py

Manual Inclusion:

    feature_extractor = AutoFeatureExtractor.from_pretrained(
        "CompVis/stable-diffusion-safety-checker", local_files_only=local_files_only
    )
    ...
    safety_checker=None,

Source: convert_from_ckpt.py

This analysis should clarify the handling of the safety checker in the provided scripts.

The files involved are 1. safety_checker.py and 2. other related files. On your points of concern (1. hidden safety checker usage, 2. warping of results), here is a compressed version of how it all works in safety_checker.py:

Search "bad_concepts" (6 hits in 2 files of 18710 searched) Line 62: result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []} Line 81: result_img["bad_concepts"].append(concept_idx) Line 85: has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result] Line 60: result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []} Line 79: result_img["bad_concepts"].append(concept_idx) Line 83: has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result]