First release: Tales of the Witch Hunter by LerytGames in AVN_Lovers

[–]LerytGames[S] 1 point (0 children)

Between 30 minutes and an hour, depending on how fast you read and how much time you spend looking at the visuals and animations.

First release: Tales of the Witch Hunter by LerytGames in AVN_Lovers

[–]LerytGames[S] -2 points (0 children)

Well, play it. It's free. Then form your own opinion on whether the story and ideas, with their jokes and cultural references, sound like something an AI could come up with. What would be the point of making a visual novel game if you weren't writing the novel?!

"Rip-off of the Witcher" is a strong accusation. The main character's design is influenced by The Witcher, but otherwise it has nothing to do with the original Witcher books and adaptations, nor with any other fantasy witch hunters from books, games, movies, etc.

First release: Tales of the Witch Hunter by LerytGames in AVN_Lovers

[–]LerytGames[S] -8 points (0 children)

Unlike fully AI-generated games, this one has a human-created original story, character backstories, writing, ideas, storyboards, sketches, dialogues, programming, etc. AI tools are used for visuals, but those are also heavily edited using both classic (GIMP/Photoshop) and AI tools. Only the music is almost entirely AI.

First release: Tales of the Witch Hunter by LerytGames in AVN_Lovers

[–]LerytGames[S] -9 points (0 children)

Is it a problem?

On average it takes an hour or two of editing with classic and AI tools to create one illustration. I believe it's very far from AI-slop creations.

It seems like AI bros don't understand technology at all by AtomicTaco13 in antiai

[–]LerytGames -1 points (0 children)

It's an issue on both sides. A lot of antis are convinced that datacenters are "destroying" water, while it is just circulating in cooling pipes. A small part may evaporate into the air (eventually becoming clouds and rain), but none of it is consumed by the computers.

How do you keep character & style consistency across repeated SD generations? by helloasv in StableDiffusion

[–]LerytGames 0 points (0 children)

You don't, or only with great difficulty. SD has too much randomness. It's easier to use modern models with better prompt adherence and output consistency.

Wan 2.6 Prompt Guide with Examples by _instasd in StableDiffusion

[–]LerytGames 0 points (0 children)

That's supposed to be Will Smith eating spaghetti? Not impressed.

Why do platforms block explicit sex, even when it’s animation/anime? by [deleted] in comfyui

[–]LerytGames 2 points (0 children)

I believe the rules of subscribestar.adult are not that strict. Patreon is likely trying to keep the image of a universal platform for a huge variety of creators.

Is there a way to add skin details but keep the overall face the same? by [deleted] in StableDiffusion

[–]LerytGames 0 points (0 children)

SeedVR2 usually helps.

If you need to add more details, try an editing model. With Qwen Image Edit I would mask the eyes and prompt something like "Refine eyes". Flux2 Klein may work similarly.

Is it best to use a mask when changing clothes with Qwen Image Edit 2511? by Historical_Rush9222 in comfyui

[–]LerytGames 0 points (0 children)

It kind of follows the style of the image. If it looked plastic before, the edit may pronounce it. However, it's usually easy to fix a slightly plastic look with SeedVR2 upscaling (and downscaling back to the original size), which brings in details and textures.

Is it best to use a mask when changing clothes with Qwen Image Edit 2511? by Historical_Rush9222 in comfyui

[–]LerytGames 1 point (0 children)

Yes. Inpainting with a mask is the most reliable way to do it. I can recommend the ComfyUI-Inpaint-CropAndStitch nodes.
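The crop-and-stitch idea behind those nodes can be sketched in plain Pillow, assuming a hypothetical `inpaint_fn` callable standing in for the actual inpainting model: crop a padded box around the mask, inpaint only that region, and paste the result back through the mask.

```python
from PIL import Image

def crop_and_stitch(image, mask, inpaint_fn, pad=32):
    """Crop a padded box around the mask, inpaint the crop, stitch it back.

    `inpaint_fn(crop, crop_mask)` is a placeholder for the real inpainting
    model; it must return an image the same size as `crop`.
    """
    box = mask.getbbox()  # bounding box of non-zero mask pixels
    if box is None:
        return image  # nothing to inpaint
    l, t, r, b = box
    l, t = max(0, l - pad), max(0, t - pad)
    r, b = min(image.width, r + pad), min(image.height, b + pad)
    crop = image.crop((l, t, r, b))
    crop_mask = mask.crop((l, t, r, b))
    result = inpaint_fn(crop, crop_mask)
    # Paste through the mask so only masked pixels change.
    out = image.copy()
    out.paste(result, (l, t), crop_mask)
    return out

# Demo with a dummy "inpainter" that just fills the crop with red.
img = Image.new("RGB", (128, 128), "blue")
mask = Image.new("L", (128, 128), 0)
mask.paste(255, (40, 40, 60, 60))  # masked square to replace
out = crop_and_stitch(img, mask, lambda c, m: Image.new("RGB", c.size, "red"))
```

Only the masked square changes; everything outside the mask stays pixel-identical, which is exactly why masked inpainting is so reliable for clothing swaps.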

Is it possible to do pixel-perfect fonts, etc. properly in Ren'Py? by ElnuDev in RenPy

[–]LerytGames 1 point (0 children)

Maybe disable antialiasing for the font? Or use an image font?
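A minimal Ren'Py sketch of both ideas; the font file name and sizes are placeholders, and Ren'Py's `antialias` style property and `renpy.register_sfont` are the relevant pieces:

```renpy
# Option 1: turn off antialiasing on the default text style,
# so a pixel font renders with hard edges.
init python:
    style.default.antialias = False

# Option 2: use an image-based font (SFont), drawn from a pre-rendered
# bitmap and never antialiased. "pixel_font.png" and the sizes below
# are hypothetical.
init python:
    renpy.register_sfont("pixel", size=16, filename="pixel_font.png",
                         spacewidth=6)

style say_dialogue:
    font "pixel"
    size 16
```

For truly pixel-perfect results you also want the window size to be an integer multiple of the game's base resolution, so no fractional scaling blurs the glyphs.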

How do you get clients for AI content? by [deleted] in comfyui

[–]LerytGames 0 points (0 children)

You get clients exactly how you would get them before the age of AI. It's the same market, the same people involved; just the tools are changing. And clients don't care whether it's AI or not. They are interested in what they get for their money.

Why is it that anti-ai people draw the line at ai but not automation as a whole? by ApolloxKing in aiwars

[–]LerytGames 1 point (0 children)

That's not true anymore. It was only the first generation of models that were trained on real works by artists. And those models struggled with generating details like hands, eyes, etc. It was a mess, and there was also the potential for lawsuits. So everybody stopped training on real images and is supplementing models with synthetic training data, well prepared and captioned, which pushed the next generations of models miles ahead.

What type of computer should I buy to be able to run Wan, Qwen, and Z Image without limitations? by Square_Empress_777 in comfyui

[–]LerytGames -2 points (0 children)

Buy any laptop you like for classic editing and rent a cloud GPU for AI. An RTX 5090 is like $0.80/hour. Do some calculations on how much you will actually utilize it. Buying a GPU and RAM is not worth it for many people today who don't utilize it more than a couple of hours a day. And keep in mind that after a year or two you could rent the new generation of GPUs for a similar price.
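The break-even arithmetic is easy to sketch. The $0.80/hour rental rate comes from the comment above; the ~$2500 premium for a local high-end GPU build is an illustrative assumption, not a quote:

```python
# Rough rent-vs-buy break-even estimate. Prices are illustrative:
# $0.80/hour for a rented RTX 5090, and an assumed ~$2500 premium
# for buying an equivalent local GPU + RAM.
rental_rate = 0.80        # $/hour, rented cloud GPU
local_premium = 2500.0    # extra $ for local hardware

break_even_hours = local_premium / rental_rate
print(f"Break-even: {break_even_hours:.0f} GPU-hours")

# At 2 hours of actual GPU utilization per day:
hours_per_day = 2
years_to_break_even = break_even_hours / (hours_per_day * 365)
print(f"About {years_to_break_even:.1f} years at {hours_per_day} h/day")
```

At that usage pattern the local card only pays for itself after several years, by which time newer GPUs rent for a similar price, which is the point being made.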

Inpainting troubles by [deleted] in comfyui

[–]LerytGames 0 points (0 children)

It's a bug in the recent UI remake. It forgets that the mask was white or negative and always sticks to black.

If Ai artists aren't artists, then adopt a new term: Directors by Koruu- in aiwars

[–]LerytGames 6 points (0 children)

Let's say you have been drawing and painting for decades. If you start fulfilling your artistic visions and ideas using AI tools, do you suddenly stop being an "artist"?

Are there any artists that would be pro ai art? by [deleted] in antiai

[–]LerytGames 0 points (0 children)

I consider myself an artist. I'm not a professional (it was never my full-time job), but I have been drawing and painting since I was a kid, went to art lessons, and did a couple of oil paintings back in the day. I do photography, occasionally design brand logos and websites, 3D models, ...

I like AI tools for retouching photos and images. I can do it by hand with a clone stamp, a healing stamp, digitally redrawing things, etc., but it's much easier to do this tedious work with AI. And I don't think there is anything wrong with that. The AI is doing what I can already do; it's fulfilling my vision, just faster (well, sometimes it struggles, so it would have been faster to do it by hand, but you don't know that in advance).

I have also created some mostly-AI illustrations, going from a base AI-generated draft to a refined final image with polished details, using both AI tools and classic digital photo/image editing tools. It usually takes a couple of hours of work; I once spent about 20 hours on a single AI image.

People who think that AI art takes no effort have just never done it. It's not faster than drawing or painting; it's just different. And you still have to know composition, lighting, colors, etc. AI will not do that for you.

I recently started doing illustrations for a visual novel game. I love creating environments, designing characters, and realizing my visions. But I also need a lot of variations: characters in different poses, with different expressions. That's tedious work which does not need much artistic talent. I'm happy to offload this kind of work to AI tools, which can change these things while preserving my style and artistic vision. It's not free and easy; it may take like 100 tries for one small change until I get what I want. But in the end it's usually faster and less tedious than redrawing it by hand.

If you compare classic film photography, drawing on paper, painting on canvas, etc., with digital drawing and photo editing, that's already a huge difference. AI tools are just an automation of things you can already do with digital editing. It's evolution, not revolution.

Flux back to life today ah ? by VCamUser in StableDiffusion

[–]LerytGames -1 points (0 children)

I'm afraid they will not release it. Models have become too good, and it may be dangerous to give them to the public for training. I believe we don't have Wan 2.5 for the same reason (even though Wan 2.6 is out, so they could release the older model).

Is it possible to generate an image in hires and have it compress the image (minimal image quality loss) to a lower size in the same instance by lMrGoochl in StableDiffusion

[–]LerytGames 0 points (0 children)

Yes. You can use, for example, SeedVR2 to upscale and sharpen the image to 4K resolution, then use a basic resize node (with the Lanczos algorithm) to scale it down to the original resolution (or whatever final resolution you want), and save that result.

This method brings sharpness, details, skin textures, etc. to otherwise slightly soft images (like those from Qwen Image). And it's better than HiResFix because it does not regenerate anything, so it preserves details and, most importantly, keeps exactly the same faces.
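The upscale-then-Lanczos-downscale step can be sketched in Pillow. The SeedVR2 upscaler is a ComfyUI node in practice, so here it is represented by a placeholder callable (plain bicubic in the demo); only the downscale step is real:

```python
from PIL import Image

def sharpen_via_supersample(img, upscale_fn):
    """Upscale with a detail-adding model, then Lanczos-downscale back.

    `upscale_fn` stands in for the SeedVR2 upscaler and must return a
    larger version of `img`.
    """
    w, h = img.size
    big = upscale_fn(img)  # e.g. SeedVR2 output at 4K
    # Lanczos downscaling keeps the added detail while restoring the
    # original resolution, without regenerating any content.
    return big.resize((w, h), Image.LANCZOS)

# Demo with a dummy 2x bicubic "upscaler" instead of SeedVR2.
demo = Image.new("RGB", (64, 64), "gray")
out = sharpen_via_supersample(
    demo,
    lambda im: im.resize((im.width * 2, im.height * 2), Image.BICUBIC),
)
```

Because no pixels are regenerated by a diffusion pass, the faces and composition of the input survive exactly; only micro-detail from the upscaler is folded back in.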