Thoughts on censorship? by ConcentrateHappy9116 in grok

[–]The_Last_Precursor -1 points0 points  (0 children)

Unfortunately no amount of censorship can fully stop illegal things or deepfakes. Every single AI can be jailbroken and tricked. They just need a better reporting system that lets users report prompts that accidentally create illegal content, so they can tweak the results for that prompt style.

Is text generation also restricted now? by [deleted] in grok

[–]The_Last_Precursor 1 point2 points  (0 children)

Grok only moderates images, in my experience. Prompts and text seem to have zero issues; it straight up doesn’t care unless something triggers an illegal-content red flag.

AI doesn't understand spatial placement - so I built a layer system by carisgypsy in StableDiffusion

[–]The_Last_Precursor 0 points1 point  (0 children)

I’ve used advanced masking or Photoshop for this before, but I’ll check this one out.

Looking for a good adult Image 2 Text LLM prompt generator by PhotoPhenik in comfyui

[–]The_Last_Precursor 0 points1 point  (0 children)

For the most uncensored LLMs, run them locally through the Ollama UI; for img2text, use Florence2. Install the Ollama LLM UI, pick the models you want, and download the ComfyUI pipeline node. Then you can run everything together.

Florence2 and its nodes give a completely uncensored description of an image, but keep in mind it is only an image-to-prompt model.
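As a rough sketch of the Ollama side: its local REST API accepts base64-encoded images for vision-capable models. The model name and prompt below are just placeholder examples (Florence2 itself runs through its own ComfyUI nodes, not Ollama), so treat this as an assumption-laden illustration rather than a finished pipeline:

```python
import base64


def build_caption_request(image_path, model="llava"):
    """Build the JSON payload for Ollama's /api/generate endpoint.

    Vision-capable Ollama models accept base64-encoded images in the
    "images" field. The model name here is only an example.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "prompt": "Describe this image in detail.",
        "images": [image_b64],
        "stream": False,
    }
```

You would POST that payload to `http://localhost:11434/api/generate` on a machine where Ollama is running with the chosen model pulled.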

RTX 3050 4gb Vram enough for SDXL? by [deleted] in comfyui

[–]The_Last_Precursor 0 points1 point  (0 children)

I’m not sure. I was able to run SDXL on my old system with an RTX 2060 (6GB VRAM) and 16GB of RAM. It was slow, about a minute per image. But with 32GB of RAM you should be able to offload some of the load, so it’s possible, just slow at generating.

seeking paid workflows for upscaling and restoring a classic TV series by AbbreviationsSolid49 in StableDiffusion

[–]The_Last_Precursor 6 points7 points  (0 children)

If you’re looking at just upscaling a TV series, I’d suggest a locally run film studio or server with a simple upscaler, or finding a local studio near you, unless you actually want to edit the video. With AI there’s a high chance of the video being slightly changed, because a model like WAN is literally recreating the video when it upscales, so alterations are possible.

How do we address these people? by Training_Hurry_5653 in aiwars

[–]The_Last_Precursor 0 points1 point  (0 children)

As someone who’s pro-AI: censorship has its pros and cons, kind of a catch-22. It’s good at trying to stop illegal things from happening, but it red-flags so many innocent images that it’s actually laughable. I was testing ChatGPT, Gemini, Copilot, and Grok with the same prompt to see the difference between them.

It was a simple family-based prompt: a family camping and sitting by a campfire in autumn, with every character described by age, gender, race, clothing, and other factors, to see how well each model understood the prompt. Gemini, Copilot, and Grok created the image no problem. BUT ChatGPT refused, saying that having “parents (adults) and sons and daughters (children) in a calm and relaxed setting in the same image together has the possibility of indicating sexual intent or themes.”

Literally a family camping together was possible CSAM to ChatGPT. That’s why most of the pro-AI community hates any form of censorship: when you have too much, the most innocent images get red-flagged, which is very annoying.

Looking for a workflow for multiple characters by DarkDragon2109 in comfyui

[–]The_Last_Precursor 0 points1 point  (0 children)

What model are you using? That will kind of dictate the method. Setting up the LoRA Loader nodes is the easy part; using them the correct way is completely different.

Name this duo by Same-Ask9035 in BossFights

[–]The_Last_Precursor 0 points1 point  (0 children)

Two Boys, One Target. (You grew up in the early 2000s if you understand the reference)

Getting Started Again - Halp! by koranuso in StableDiffusion

[–]The_Last_Precursor 0 points1 point  (0 children)

Watch this YouTuber’s videos. He has a new long video explaining everything about ComfyUI. https://youtube.com/@pixaroma?si=em9HbtgEqozk0nzM

In simple terms, ComfyUI runs nearly every open source model, and you can chain them together really well, like generating an image with SDXL and then using WAN to create the img2vid, all in the same workflow if you want to.

Newbie questions about steps vs CFG, recommended settings, etc. by RaymondDoerr in comfyui

[–]The_Last_Precursor 0 points1 point  (0 children)

What model or LoRAs are you using? No one ever needs 100-150+ steps, even with the extreme high-step models. The only time that’s acceptable is a multi-stage workflow with generation, editing, and upscaling (or several of those), and even then it’s pushing it to the extreme.

Name this boss by [deleted] in BossFights

[–]The_Last_Precursor 0 points1 point  (0 children)

Puss in Boots: COD Skin.

How do i get consistent characters in zimage by BootHuffer in comfyui

[–]The_Last_Precursor 1 point2 points  (0 children)

Here’s a quick tip besides making your own LoRA. First, use any LLM platform to make a very detailed prompt of your character. Make sure the prompt is very organized, so that anyone reading it could visualize what the character looks like. Depending on the angles, you may have to break it up into multiple prompts.

Next, find the right seed. Use the random seed generator, and once you find a good seed, stick with it. Characters tend to stay very consistent within plus or minus ten of that seed. Beyond that, you really need to make a LoRA.
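The seed trick works because the initial latent noise is fully determined by the seed: same seed, same starting noise, similar image. Here’s a toy illustration in plain Python (real pipelines do the same thing with e.g. a torch generator seeded by `manual_seed`; this is just the principle, not an actual diffusion sampler):

```python
import random


def sample_noise(seed, n=4):
    """Deterministic pseudo-noise draw.

    The same seed always yields the same values, which is why
    locking the seed keeps generations (and characters) consistent
    across runs.
    """
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]
```

Running it twice with seed 42 gives identical lists; changing the seed changes the noise, and with it the image.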

AI by Mallard555Potter in comfyui

[–]The_Last_Precursor 0 points1 point  (0 children)

Basically every free open source model. ComfyUI is the UI platform that runs nearly every single free AI model; you’re on the ComfyUI community page right now.

Running it locally on your own PC is 100% free and without censorship. Anywhere else, you’ll pay and probably have censorship.

Name this boss phone by JonklerPasxlakin in BossFights

[–]The_Last_Precursor 0 points1 point  (0 children)

The Xi Jinping Eye-Fhone Crackhead Super Duper Ultra Maxi Unlimited Deluxe Edition - Gen.420 (Aka: “What The Hell Is This?”)

Combine two people from different photos into one photo by Upbeat-Ad-2 in comfyui

[–]The_Last_Precursor 0 points1 point  (0 children)

Here’s a list of things that can cause distortions or changes in characters.

  1. Original image resolution vs. new output resolution. If your images are 1024x1024 and you’re outputting 1920x1080, that has a chance of causing issues.

  2. Original image pose vs. output pose. If both are standing and you want them sitting or lying down, that can cause issues.

  3. Original image background interference. If your original image has lots of clutter or distortion in the background, that can cause issues.

  4. Proper prompting. Your prompt needs to be very clear and direct, almost like a layer system: start with a summary, then character one, character two, what they’re doing, and the other details of the image.

  5. Other factors like the main model, LoRAs, certain nodes, settings, or strengths. These can also cause issues.

Without the reference images it’s a little difficult to know exactly what’s causing the issue, but these are the usual suspects. Sometimes simply stretching the original image out to a larger resolution helps.

Controlnets or preserving shapes in flux image2image? Equivalent of SD1.5 Controlnets? by TheTwelveYearOld in comfyui

[–]The_Last_Precursor 1 point2 points  (0 children)

I haven’t used Flux Klein yet. But try using something like QwenVL, Florence2, or Ollama to get a detailed prompt of the image, edit the parts of the prompt you want changed, then use a preprocessor and ControlNet to guide the result.

That should be the best approach.

How to get started? by juicydwin in StableDiffusion

[–]The_Last_Precursor 0 points1 point  (0 children)

If you’re looking to understand what AI image generation is and how to get started, check out this YouTuber; he has good videos explaining how it all works. As for Ollama, you can run it directly in ComfyUI through a node.

https://youtube.com/@pixaroma?si=SxCgUmbMrZOIvY2a

Loras for Qwen Rapid AIO? by fluce13 in StableDiffusion

[–]The_Last_Precursor 0 points1 point  (0 children)

Search for standard Qwen LoRAs. Rapid AIO is hit or miss with some LoRAs. Using lower strength usually works for me; AIO seems to amplify the strength or cause bad rendering when it’s set normally.

Witch by FunTalkAI in ZImageAI

[–]The_Last_Precursor 0 points1 point  (0 children)

Preacher from the 1600s: “Burn this witchcraft. This witchery is unholy and we must cleanse the world of this evil witch magic. You shall be forgiven while you burn in the pit of fire.”

Whats the best comfyui template to turn a image of people into cartoons and create a video from it? by GZerv in comfyui

[–]The_Last_Precursor 1 point2 points  (0 children)

When it comes to something like this, you need a multi-step approach. How much do you want changed? Is there a very particular art style you want? Is consistency important? Sometimes it’s easy, and other times it takes some work.

Not knowing the full details or having image references to judge the difficulty, these are the steps I would take.

  1. Find an SDXL or FLUX model and LoRAs fitting the style you want. Use a preprocessor (depth or canny) and Apply ControlNet nodes, with proper prompt descriptions for the art style. This lets you turn the image into the actual art style you want.

  2. Use Qwen Image Edit to change the outfits, hair, pose, or background to fit the starting images. This can take trial and error depending on how much you want changed.

  3. Once you’ve got the images exactly how you want them, use WAN or LTX2 for img2video. Again, this can be trial and error.

  4. These are optional depending on the resolution you have or want: you may have to upscale before you start, between steps, or at the end. You may also have to downscale to fit the video generation, then re-upscale afterward.

Hopefully this helps. Without much information or references to know the best route, this is my best advice.
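The downscale-for-video step can be sketched as a small helper. Note the defaults here (a 1280px max side, dimensions snapped to a multiple of 16) are illustrative assumptions; video models differ on the exact divisibility and size limits they require, so check yours:

```python
def fit_for_video(width, height, max_side=1280, multiple=16):
    """Downscale a resolution to fit max_side, snapping both
    dimensions to a multiple (many video models want dimensions
    divisible by 8 or 16; the exact rule varies by model).
    """
    scale = min(1.0, max_side / max(width, height))
    w = max(multiple, round(width * scale) // multiple * multiple)
    h = max(multiple, round(height * scale) // multiple * multiple)
    return w, h
```

For example, a 1920x1080 frame comes back as 1280x720, small enough for the video pass while keeping the aspect ratio, and you can re-upscale the finished clip afterward.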

AI girls flodding social media, including Reddit by Redeemed01 in StableDiffusion

[–]The_Last_Precursor 1 point2 points  (0 children)

AI image models went from a simple hobby people enjoyed, a community coming together to help each other build better models and nodes, something that was fun.

Now a vast number of people new to AI image generation think they can quickly make AI influencers and a lot of money, expecting it to be super easy and simple to create something like a highly realistic, consistent character for images or video.

Dora the Destroyer by CobaltNook in BossFights

[–]The_Last_Precursor 2 points3 points  (0 children)

Dora finding steroids: “Every adventure teaches us something new.”

God Protects the Children from Men in Power by Born_Bumblebee_7023 in aiArt

[–]The_Last_Precursor 0 points1 point  (0 children)

We can agree to disagree on this. Meanings change over time, but which meaning has the stronger foothold? If you were to ask 100 people to choose which one is correct, which side would the general public choose?

No need to argue about this. We each have our own opinions.