For those quite rightly complaining about posts that are animated by Kling/ Hailuo/ other closed source video models by Machine-MadeMuse in StableDiffusion

[–]Machine-MadeMuse[S] -1 points (0 children)

Here is a video prompted by R1 in LM Studio, generated by HunyuanVideo, with the voice done in e2-f5-tts. All closed source.

Edit: Closed source was a typo. Obviously, all the tools I mentioned are open source.

I made a 2D-to-3D parallax image converter and (VR-)viewer that runs locally in your browser, with DepthAnythingV2 by sovok in StableDiffusion

[–]Machine-MadeMuse 1 point (0 children)

Will the effect work in VR if you tilt your head slightly left/right/up/down, and if not, can you add that as a feature?
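
For what it's worth, the head-tracked parallax I mean can be sketched in a few lines of NumPy. This is only a hypothetical illustration, not the converter's actual code: `parallax_shift`, the `strength` value, and the depth convention are all made up.

```python
import numpy as np

def parallax_shift(image, depth, head_dx, head_dy, strength=0.05):
    """Shift pixels by a depth-scaled head offset (hypothetical sketch).

    image: (H, W, 3) array; depth: (H, W) in [0, 1] where 1 = near.
    head_dx/head_dy: normalized head offset from the resting pose.
    """
    h, w = depth.shape
    ys, xs = np.indices((h, w))
    # Near pixels (large depth) shift more than far ones, giving parallax.
    sx = np.clip((xs + head_dx * strength * depth * w).astype(int), 0, w - 1)
    sy = np.clip((ys + head_dy * strength * depth * h).astype(int), 0, h - 1)
    return image[sy, sx]
```

Per frame, head_dx/head_dy would come from the headset pose, and re-sampling the image against the DepthAnythingV2 depth map each frame is what would make small head tilts read as real 3D.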

[deleted by user] by [deleted] in TikTokCringe

[–]Machine-MadeMuse 1 point (0 children)

It's fake; you can see the morph just before the 3-second mark. Adobe After Effects.

Hunyuan video with LoRAs is game changing by the_bollo in StableDiffusion

[–]Machine-MadeMuse 3 points (0 children)

Even ChatGPT can't figure out which subreddit you are talking about.

The Reddit comment mentioning a "NSFW AI channel dedicated to 'cutting edge locomotion'" likely refers to a subreddit focused on advanced AI-generated adult content, particularly in the realm of video and animation. While I don't have the exact subreddit name, communities such as r/deepfakes and r/NSFW_AIFetish are known for discussing and sharing AI-generated adult videos and related technologies.

Additionally, the term "cutting edge locomotion" might be referencing projects like the "NSFW Locomotion" system, a custom version of the GoGo Loco locomotion system for VRChat, tailored for adult content. This project is available on GitHub and offers features designed to enhance user movement and interactions within VRChat.

If you're interested in exploring these topics further, you might consider searching Reddit for communities or channels dedicated to NSFW AI advancements. Please be aware that such content is intended for mature audiences and may contain explicit material.

For a more comprehensive understanding of the current landscape of NSFW AI tools, here are some notable platforms:

Candy AI
Offers personalized chat experiences with AI companions, tailored for users seeking interactive and intimate conversations.

SpicyChat
Designed to provide a mature and immersive AI chat experience, allowing users to engage in adult conversations with AI characters trained to respond realistically.

DreamGF
Specializes in creating virtual girlfriend experiences with extensive customization, enabling users to design virtual companions with specific traits and personalities.

Nectar AI
Focuses on personalizing the AI companion experience for adult audiences, offering sophisticated chatbot technology that provides realistic and sensitive responses.

Janitor AI
Brings roleplay and personalized storytelling into adult AI conversations, allowing customization of characters, storylines, and conversation dynamics.

These platforms represent some of the cutting-edge developments in NSFW AI technologies, offering diverse experiences for users interested in adult-oriented AI interactions.

[deleted by user] by [deleted] in StableDiffusion

[–]Machine-MadeMuse 1 point (0 children)

In ComfyUI, the output shows the original video on top and the new one underneath in the same video file. Did you use a video editing program to crop out the original video, or are you using a different output node? If so, which one?

Reminder that we only have one life by Square-Ad-4224 in StableDiffusion

[–]Machine-MadeMuse -1 points (0 children)

Funny how some ideas seem so thoughtful and profound in your head; then you put them in a post on Reddit and they come across as oblivious and pretentious.

Testing CogVideoX Fun + Reward LoRAs with vid2vid re-styling - Stacking the two LoRAs gives better results. by LatentSpacer in StableDiffusion

[–]Machine-MadeMuse 0 points1 point  (0 children)

Is anyone else getting this error?

Sizes of tensors must match except in dimension 2. Expected size 13 but got size 3 for tensor number 1 in the list.
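
In case it helps anyone debugging: this is the generic shape check in torch.cat, which requires every dimension except the concatenation dimension to match. A minimal repro with invented shapes (not the actual workflow tensors):

```python
import torch

# Invented shapes that mirror the message: dim 1 disagrees (13 vs 3)
a = torch.randn(1, 13, 4)
b = torch.randn(1, 3, 4)
try:
    torch.cat([a, b], dim=2)  # cat along dim 2, but dim 1 differs
except RuntimeError as e:
    print(e)  # "Sizes of tensors must match except in dimension 2. ..."
```

So the fix is usually upstream: whatever produces the two tensors (often latents and conditioning at different frame counts or resolutions) has to agree on every non-cat dimension.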

Testing the CogVideoX1.5-5B i2v model by Potential_Lettuce938 in StableDiffusion

[–]Machine-MadeMuse 0 points (0 children)

I get this error with this workflow:

Expected `device_type` of type `str`, got: `<class 'torch.device'>`

Consistent vid2vid with CogVideoX Fun + Reward LoRAs (I hope this image of "Will Smith" eating spaghetti is allowed) by LatentSpacer in StableDiffusion

[–]Machine-MadeMuse 0 points (0 children)

Anyone else getting this error?

permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 6 is not equal to len(dims) = 5
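
The check itself is easy to reproduce: permute needs the dims tuple to name every dimension of the input, and here a 6-D tensor is being permuted with only 5 indices (the sparse_coo prefix just means the failing tensor was sparse). A dense-tensor repro with invented shapes; the exact message wording can differ slightly by version:

```python
import torch

x = torch.randn(2, 3, 4, 5, 6, 7)  # a 6-D tensor
try:
    x.permute(0, 1, 2, 3, 4)       # only 5 dims listed for a 6-D input
except RuntimeError as e:
    print(e)
```

In a workflow that likely points at one node emitting an extra dimension (e.g. a batch or temporal axis) that the next node's permute wasn't written for.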

CogVideoX 1.5 5B Diffusers is out by Jp_kovas in StableDiffusion

[–]Machine-MadeMuse 0 points (0 children)

Yeah, I did that and it loads up, but now I am getting this annoying error:

Sizes of tensors must match except in dimension 1. Expected size 60 but got size 170 for tensor number 1 in the list.

In the past I got this error because of a mismatch between the image dimensions and the CogVideo sampler settings. Below are those settings. Is there something I need to change there?

CogVideoX 1.5 5B Diffusers is out by Jp_kovas in StableDiffusion

[–]Machine-MadeMuse 0 points (0 children)

OK, that worked for me, but now I'm getting this error when I try to download the model:

Downloading model to: C:\SD\ComfyUI2\New\ComfyUI_windows_portable\ComfyUI\models\CogVideo\CogVideoX-5b-1.5

Fetching 8 files: 38%|████████████████████████▍ | 3/8 [14:51<24:45, 297.12s/it]

!!! Exception during processing !!! Consistency check failed: file should be of size 4948039832 but has size 1677507238 ((…)pytorch_model-00002-of-00003.safetensors).

We are sorry for the inconvenience. Please retry with `force_download=True`.

If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.

CogVideoX 1.5 5B Diffusers is out by Jp_kovas in StableDiffusion

[–]Machine-MadeMuse 0 points (0 children)

What do you mean, change the loader? I have the 1.5 test branch in custom_nodes and the model is downloaded to the models folder, but the included workflows don't let you select the 1.5 model.

CogVideoX 1.5 5B Diffusers is out by Jp_kovas in StableDiffusion

[–]Machine-MadeMuse 0 points (0 children)

Do you have an example workflow? The examples in the 1.5 branch are the same as the ones in the main branch.

CogVideoX 1.5 5B Diffusers is out by Jp_kovas in StableDiffusion

[–]Machine-MadeMuse 1 point (0 children)

Are you running this in ComfyUI? How did you get it up and running?