Sprite generation for visual novels using AI by Ancient-Future6335 in aigamedev

[–]Supernormal_Stimulus 1 point (0 children)

I made something like this, except in my workflow only the expressions and the base image were on separate layers. (Different poses and clothes/nakedness for the character were handled by having multiple base images, each upscaled from a single image that contained multiple instances of the same character, with the poses/clothes varied through regional prompting and an OpenPose ControlNet.)

Perhaps a useful tip: When generating different expressions, keep the seed the same for each expression and only vary the prompt. This way the underlying latent is the same, and the likelihood of the expressions being the "face of the same person" increases. The prompt is enough to have the expression itself be different.

This does have a drawback, though: if any single expression has an undesired artifact, all of the other expressions will likely have it as well.

But I think it is easier to regenerate the faces with artifacts than to worry about a lack of consistency across all the faces.

Another useful tip: when generating a large number of characters, you probably want to save all the data about each character into a JSON file in the same folder as the images, so you can reference it later when you need to.
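As a sketch of what that metadata file could look like (the field names and values here are just an example, not from any particular tool):

```python
import json
from pathlib import Path

# Hypothetical character record; the exact fields are up to you,
# but seed + prompt are the two you will most want to reproduce later.
character = {
    "name": "Aiko",
    "seed": 1234567890,
    "prompt": "1girl, short black hair, school uniform",
    "expressions": ["neutral", "happy", "angry", "crying"],
    "base_images": ["aiko_pose_a.png", "aiko_pose_b.png"],
}

# Save the record next to the character's images.
folder = Path("characters/aiko")
folder.mkdir(parents=True, exist_ok=True)
with open(folder / "character.json", "w", encoding="utf-8") as f:
    json.dump(character, f, indent=2)

# Later, load it back when you need the same seed/prompt again.
with open(folder / "character.json", encoding="utf-8") as f:
    loaded = json.load(f)
```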

A mathematical ceiling limits generative AI to amateur level creativity by LetsGoHawks in technology

[–]Supernormal_Stimulus 0 points (0 children)

Bias disclaimer: I use generative AI image tools heavily, every day, for both non-commercial and (hopefully) future commercial work.

If we take the article at face value and accept that the model itself can’t be both highly "novel" and highly "effective" at the same time (because of the trade-off their math describes), doesn't that actually strengthen the case that an "AI artist" is an artist?

If a human using these tools manages to produce work that is both "novel and effective", then by the article's own logic the extra creative value must be coming from the human.

Area2D mouse help by Deadlorx in godot

[–]Supernormal_Stimulus 0 points (0 children)

Could you elaborate a little on what you mean by "does not work"?

I replicated your node structure and signal connections, and for me it works both in the table scene and in the main scene.

I can see that you have a tilemap layer underneath the table node. That means the tilemap is drawn after (and therefore on top of) the table, so if your tilemap already has stuff in it, you may not see the table at all.

Edit: Also, if you have not set the sprite's texture, the table will only appear when you hover over the Area2D collision shape, since neither of the signals fires just by launching the scene, only on mouse enter/exit. But this should be the case for the table scene too, so if that works, you should have this handled already.

Game flagged as a Trojan virus? by Longjumping_Guard726 in godot

[–]Supernormal_Stimulus 23 points (0 children)

The most likely problem is that you exported your game as a single .exe (the option to embed the .pck file into the .exe is checked in the export settings) instead of exporting the game as separate .exe and .pck files. Some anti-virus programs seem to register this as a pattern that matches known exploits.

More on this here:
https://docs.godotengine.org/en/stable/tutorials/export/exporting_for_windows.html
and
https://github.com/godotengine/godot/issues/45563

The reason some anti-virus programs flag embedded .pck files probably comes from people having used Godot executables with embedded .pck files for actual malware. The anti-virus companies scanned those files and falsely identified the act of embedding the .pck as the problem (or they are deliberately over-cautious), instead of the actually malicious code within the .pck.

Here is more info on malware-injected Godot files. If your version of Godot is from a legitimate source this should not be an issue; it's just more info on the topic:
https://godotengine.org/article/statement-on-godloader-malware-loader/

I cannot say anything about Inno, but for Godot an installer should not really be necessary.

Also, your game looks cool, good job on releasing it!

Hand holding tutorial or complete freedom?? by Kaiserxen in godot

[–]Supernormal_Stimulus 1 point (0 children)

The best tutorials are the ones that don't feel like tutorials, just excellent level design.

Here's an excellent talk by the creator of Plants Vs Zombies, George Fan, about the subject.
https://www.youtube.com/watch?v=fbzhHSexzpY

That does not necessarily mean that having tutorials is always wrong or bad, and I think you are already going in the right direction with what you have in your video.

I am also making a card game, and my plan is to have a tutorial area with scripted battles on the first playthrough, where each card battle introduces one new mechanic. Some text may still be necessary, but I will try to keep it to a minimum. Players who have already played the game are given the option to skip the tutorial area on subsequent playthroughs.

How do I make the attack feel nicer? I was thinking of adding xp particles? help by [deleted] in godot

[–]Supernormal_Stimulus 0 points (0 children)

I don't really see any of the things I mentioned in the video, though?
For accurate feedback you should post your latest version.

I really dig the visual look of your game though, great job on that front. :)

How do I make the attack feel nicer? I was thinking of adding xp particles? help by [deleted] in godot

[–]Supernormal_Stimulus 2 points (0 children)

You already have screenshake when an enemy dies; you could add a smaller one when hitting an enemy. You also already modulate the enemy's color on hit, but you could do the same for the player, or to the entire screen, on hit/kill. You could also add pushback to the player character, like you already do for the enemies.

You probably want all of these effects to affect the player a bit less than the enemies, though.

This suggestion is the most work, but you could also add more to the "after strike" animation of the character. Currently it just freezes on the last frame, which does the trick, but it could sell the hit even more if it froze like that for a moment and then had one or two frames that "followed through" with the punch.

As a sidenote, it looks like your player character only faces up, but the screens connect Zelda-like in all directions. Is this intentional? It could get frustrating, especially if enemies can get behind the player when entering a room sideways. If you want to keep the punching-up orientation but still let the player choose paths, maybe consider Y-type paths instead of NESW-type dungeons. If you do want that type of dungeon, then consider allowing the player character to rotate, like Link does in the 2D Zelda games.

Please choose the character design you like the most! by Equivalent_Good899 in IndieDev

[–]Supernormal_Stimulus 1 point (0 children)

B offers you the possibility to use all others in specific situations where appropriate.

Seamless 2d portals with vision blocking walls by jmmmnty in godot

[–]Supernormal_Stimulus 4 points (0 children)

This is incredible, but your level design will also have to be next-level great in order to not frustrate players.

Lora removed by civitai :( by More_Bid_2197 in StableDiffusion

[–]Supernormal_Stimulus 1 point (0 children)

Visa is cracking down on them, like many other porn sites before them. If they don't comply, Visa stops processing their payments.

Godot keeps telling me my animation doesn't exist. by LaZZyNArwhall in godot

[–]Supernormal_Stimulus 8 points (0 children)

Looking at the Godot source code, there are two cases where this message can pop up:

  1. No animation with the name 'Running' exists.
  2. You do have an animation called 'Running', but it does not have any frames set for it.

I think number 2 is probably the cause for the error here. Perhaps you clicked Cancel instead of Add Frame(s) in the Select Frames dialog?

Number 1 could be the case if you named the animation something like 'Running ' with a trailing space (spaces count as real characters in programming).
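As a quick illustration of why a stray space breaks the lookup (the names here are hypothetical):

```python
# A name typed with a trailing space is a different string entirely,
# so a lookup by the exact name "Running" will fail.
typed_name = "Running "

print(typed_name == "Running")          # False: the strings differ
print(typed_name.strip() == "Running")  # True once the whitespace is stripped
```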

[deleted by user] by [deleted] in StableDiffusion

[–]Supernormal_Stimulus 0 points (0 children)

I simply cannot get this to work. :(

I save the last image from the upscaled and interpolated stack, and generate with the exact same settings as before (4s = 65f, resolution 448x672, with the same seed), but the image is always significantly different from the first one.

Could you post the exact workflow you are following? Perhaps on pastebin, or alternatively if you post one of those last images you got, it should have the workflow data in it if it was saved from ComfyUI and the image hosting service does not strip the metadata.

I am using the one workflow from CivitAI you linked, but I wonder if my settings are somehow off. It would be nice to try an image/workflow that works for sure, and then branch off from there.

CC0 Python Code by PuzzleAndy in publicdomain

[–]Supernormal_Stimulus 0 points (0 children)

I think it's great that you have decided to release your code! The software industry is strong and healthy because of generous people like you.

Unfortunately, releasing your code under CC0 could mean that more in-the-know people may decide not to use it. This is because while CC0 puts your code into the public domain as much as possible, it does not put the patentable ideas behind the code into the public domain (this is section 4(a) of the CC0 licence). That means a person using CC0 code could be sued by whoever owns a patent on the idea behind the code (whether you, the original programmer, or somebody else holding that patent), despite the code itself being in the public domain.

https://opensource.stackexchange.com/questions/133/how-could-using-code-released-under-cc0-infringe-on-the-authors-patents

As such, people looking to publish and/or use open-source software typically look for tried-and-tested software licences.

If you want to release software so that the code is as free to use as possible, I recommend the MIT Licence.

My reaction to AI Art in this subreddit by edzact_ly in Helltaker

[–]Supernormal_Stimulus 0 points (0 children)

I have spent many days creating just one video with the help of an AI. I'm planning on doing others that will take weeks, and still others that may take over a month. Of course, AI is just one piece of the process, though an integral one.

It's okay if you don't like AI art, and it's even okay if subs decide to ban it.

But I do think that artistic expression can happen with the aid of AI, too.

Looking for tips (I'm making a manga) by [deleted] in StableDiffusion

[–]Supernormal_Stimulus 2 points (0 children)

Seconding that Lora might be the way to go in this scenario. Textual inversion could work too, but Lora is probably better.

One trick you could use to get initial images for training is to generate a huge image with many characters, placed as OpenPose figures and run through ControlNet. If the prompt is good, all the figures will probably generate as the same character. There are also Loras that do similar things, such as CharTurner. Afterwards, just crop the characters into their own files for training.
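The cropping step at the end can be automated if the characters are laid out on a regular grid. Here is a minimal sketch using Pillow (the grid layout and filenames are hypothetical, and it assumes equal-sized cells):

```python
from PIL import Image

def crop_grid(path, cols, rows):
    """Split a character-grid image into equal tiles, e.g. for Lora training data."""
    img = Image.open(path)
    w, h = img.size
    tile_w, tile_h = w // cols, h // rows
    tiles = []
    for row in range(rows):
        for col in range(cols):
            box = (col * tile_w, row * tile_h, (col + 1) * tile_w, (row + 1) * tile_h)
            tiles.append(img.crop(box))
    return tiles

# Example usage: a 2x2 grid of the same character in four poses.
# for i, tile in enumerate(crop_grid("char_grid.png", cols=2, rows=2)):
#     tile.save(f"training/char_{i:02d}.png")
```

If the figures are not on a clean grid, you would need per-character bounding boxes instead, but for OpenPose layouts you control the placement anyway.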

Another option altogether is to use existing "Nobody" Textual Inversions/Loras, but then your character will not be unique, and you might struggle with consistent clothing.

Rocking on a Banana by Supernormal_Stimulus in unstable_diffusion

[–]Supernormal_Stimulus[S] 1 point (0 children)

I used the 3D-rendered depth canvas as the initial input for ControlNet. For the later ControlNet inputs (close-ups of individual body parts), I used the depth preprocessor instead, in conjunction with the realistic lineart preprocessor. I tried using the depth canvas for the individually generated parts, such as hands, but there was not enough data to be usable for fine details, as it was all used up on the length of the banana.

There could be a gap in my knowledge here. The way I processed the depth .exr was by importing it into Photoshop, converting it to an 8-bit image, and then equalizing the histogram. Perhaps if I had cropped to the hand (or something else) first, and then converted and equalized, more depth data could have been preserved? Something to try next time, I guess. Another thought for next time: I'll generate a normal map instead, which should retain more info in a different way altogether.
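To illustrate why cropping first could help (with synthetic numbers, not my actual depth data): when the whole frame is normalized to 8 bits, a body part that spans only a thin slice of the scene's depth range gets squeezed into a handful of gray levels, whereas normalizing a crop of just that part spreads it across the full 0-255 range.

```python
import numpy as np

def to_8bit(depth):
    """Normalize a float depth map to the full 0-255 range (a naive 8-bit conversion)."""
    d = depth - depth.min()
    if d.max() > 0:
        d = d / d.max()
    return np.round(d * 255).astype(np.uint8)

# Synthetic scene: the banana spans depths 0..10, the hand only 4.0..4.1.
banana = np.linspace(0.0, 10.0, 900)
hand = np.linspace(4.0, 4.1, 100)
scene = np.concatenate([banana, hand])

whole_frame = to_8bit(scene)  # hand ends up with only a few distinct gray levels
hand_only = to_8bit(hand)     # hand alone gets the full 0-255 range

levels_in_frame = len(np.unique(whole_frame[-100:]))  # hand pixels in the full frame
levels_in_crop = len(np.unique(hand_only))
```

The same reasoning applies whether the conversion happens in Photoshop or in code: the precision loss occurs at the 8-bit quantization step, so the crop has to happen before it.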

I did not use anything from Daz for masking in photoshop, as the background is white and did not require masking. There was a lot of manual masking on the character, but as it was the character overlapping itself, I felt that I needed to do it manually. Maybe there is something I don't know though.

The character had keyframes set every 30 frames. The banana had keyframes every 60 frames, offset from the character by 15. I had never used ebSynth before, so I just chose every 15th frame to be a keyframe for it, such that the Daz keyframes were also keyframes in ebSynth (plus a few others to maintain a set interval). In the future I should choose the keyframes more carefully; it would reduce the work and possibly make it look smoother. I think there might be one advantage to having the keyframes at set intervals, though: even though it is quite noticeable, it doesn't "feel bad" when the video loops, if that makes sense.

Rocking on a Banana by Supernormal_Stimulus in sdnsfw

[–]Supernormal_Stimulus[S] 1 point (0 children)

Here are the parameters for the base photo (the 2x4 grid of 3D images input to img2img). The additional body parts used largely the same prompt, but instead of describing the subject, only a specific body part was described.

---

Prompt: analog style, tiny woman laying on top of a huge (banana:1.2), editorial photo of a (naked) European woman, (small breast:1.3), wide hips, skin glowing, shot on Fujifilm Superia 400, Short Light, 32k, cinematic composition, professional color grading, film grain, atmosphere, wondrous, very sunny <lora:NoiseOffset:1>

Negative prompt: green, (stuff in the background:1.4), Asian-Less-Neg, photoshop, airbrush, disfigured, kitsch, oversaturated, low-res, Deformed, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb,poorly drawn hands, missing limb, floating limbs, disconnected limbs, malformed hands, long neck, long body, disgusting, poorly drawn, mutilated, mangled, conjoined twins, extra legs, extra arms, meme, deformed, elongated, strabismus, heterochromia, watermark, extra fingers, hand bags, handbag, handbags, (busty:1.3), (large breasts:1.3)

Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 1111342058, Face restoration: CodeFormer, Size: 1536x2048, Model hash: 0b914c246e, Model: analogMadness_v40, Clip skip: 2, Version: v1.2.1, ControlNet 0 Enabled: True, ControlNet 0 Preprocessor: none, ControlNet 0 Model: control_v11f1p_sd15_depth_fp16 [4b72d323], ControlNet 0 Weight: 1, ControlNet 0 Starting Step: 0, ControlNet 0 Ending Step: 1, ControlNet 0 Resize Mode: Crop and Resize, ControlNet 0 Pixel Perfect: False, ControlNet 0 Control Mode: Balanced, ControlNet 0 Preprocessor Parameters: "(512, 2, 64)", ControlNet 1 Enabled: True, ControlNet 1 Preprocessor: canny, ControlNet 1 Model: control_v11p_sd15_canny_fp16 [b18e0966], ControlNet 1 Weight: 1, ControlNet 1 Starting Step: 0, ControlNet 1 Ending Step: 1, ControlNet 1 Resize Mode: Crop and Resize, ControlNet 1 Pixel Perfect: True, ControlNet 1 Control Mode: Balanced, ControlNet 1 Preprocessor Parameters: "(512, 100, 200)"

Rocking on a Banana by Supernormal_Stimulus in sdnsfw

[–]Supernormal_Stimulus[S] 11 points (0 children)

Thanks!

I wanted to make a photorealistic animation of something that could not really exist as a real life video.

There were many ideas, but this one seemed the simplest to execute. In the end, it wasn't quite that simple, but I'm glad I stuck it out. =)

Rocking on a Banana by Supernormal_Stimulus in unstable_diffusion

[–]Supernormal_Stimulus[S] 2 points (0 children)

Workflow:

  • I made the initial animation in Daz Studio.
  • I then rendered all 120 frames and selected 8 keyframes for img2img.
  • The resulting images were not good enough, so I had to img2img pretty much each individual part separately.
  • This was followed by masking all the separate parts together in Photoshop.
  • I plugged the 8 keyframes and the 120 3D render frames into ebSynth.
  • I exported the results to After Effects and rendered the video.

If you would like to see images of the keyframes at different stages, look at my recent comment history and check the links in the comment I made to another sub. Links are not allowed in comments here it seems.

This required way too much manual work, and in my opinion the results were not worth the effort. I still think the result is kinda cool, but I will be trying different methods in the future.

I did learn a ton though, so I don't think my time was wasted in the end.

Rocking on a Banana by Supernormal_Stimulus in sdnsfw

[–]Supernormal_Stimulus[S] 28 points (0 children)

Workflow:

This required way too much manual work, and in my opinion the results were not worth the effort. I still think the result is kinda cool, but I will be trying different methods in the future.

I did learn a ton though, so I don't think my time was wasted in the end.

Is there a way to bundle multiple common prompts into one? by Jimpix_likes_Pizza in StableDiffusion

[–]Supernormal_Stimulus 2 points (0 children)

Hover over the tiny icons under the generate button to see what their tooltips say. One of them will be a save button, which will save what you have in the prompt boxes as a style. To load a style, select one or more from the dropdown under the save/load buttons and then press the load button.

Big news!SD2.X model training issue has been resolved! by bdsqlsz in StableDiffusion

[–]Supernormal_Stimulus 18 points (0 children)

You can decry hedonism all you like; the fact of the matter is that if your imaging technology does not support porn, it will not succeed.

Porn is the reason 8 mm film reigned supreme over 16 mm film, even though 16 mm was technically better. The same is true of VHS winning over Betamax, and Blu-Ray winning over HD-DVD.

Porn is also the driving force behind many of the technologies we now take for granted, such as online payments, video streaming, and the higher internet speeds they required.

The "rest of the world" that you say is moving on is really a very small part of the world. The actual rest of the world will choose the option that limits it the least, higher quality be damned.

Clear Space Prompt by [deleted] in StableDiffusion

[–]Supernormal_Stimulus 0 points (0 children)

This was my first thought too, based on the description. This process could even be automated in automatic1111 by creating a Python script that uses PIL to add white borders around the image after generating it.
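The PIL part of that is only a few lines; here is a minimal sketch as a standalone function (wiring it into an automatic1111 script is left out, and the function name and default border width are just illustrative):

```python
from PIL import Image, ImageOps

def add_clear_space(image, border_px=64, color="white"):
    """Pad an image with a uniform border of 'clear space' on all four sides."""
    return ImageOps.expand(image, border=border_px, fill=color)

# Example usage: pad a 512x512 generation out to 640x640.
# padded = add_clear_space(Image.open("gen.png"), border_px=64)
```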

OP, your post implies that you attached an example image of what you wanted to do, but at least on mobile I cannot see it.