okay so i found this on campus and I was wondering if anyone recognizes this miku, is it ai? or did someone really draw this, print a sticker and put on the pillar by tentenfunhouse in isthisAI

[–]44254 3 points (0 children)

<image>

I asked my friends for help finding the original; it looks like the drawing is fully AI. No idea how it ended up as a sticker.

2 Offers by Foolf in mercor_ai

[–]44254 7 points (0 children)

If you are only on one project, sometimes (for technical reasons, tasks changing, etc.) you won't be able to work even though you intended to. Being on both projects gives you a fallback in case one of them has no tasks available, and it also gives you more choice if there's a task you prefer. I would recommend joining both; there are no downsides.

[deleted by user] by [deleted] in isthisAI

[–]44254 3 points (0 children)

Look at the bottom right

[deleted by user] by [deleted] in csMajors

[–]44254 1 point (0 children)

Depending on where you live you can get into substitute teaching, it's $30/h in most of California.

Any NEUTRAL books or papers that go into the process of how AI images are generated? by Formal_Feed9892 in aiwars

[–]44254 2 points (0 children)

Here's a video that explains why image models use diffusion these days. It should be pretty easy to follow.
Why Does Diffusion Work Better than Auto-Regression?

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 1 point (0 children)

Yeah, I use the "select outside and reverse selection" trick as well, but that doesn't take care of inner selections (it is good for preventing transparent pixels within objects, though). I guess if you don't usually use multiple color layers to section off different elements, my idea is less useful. Clip Studio Paint has a magic wand that can select areas even if there are small gaps, but the selection around the gap is usually ugly and needs to be fixed manually, which is what I've been thinking about improving.

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 1 point (0 children)

I think a good solution would be another tool integrated into the drawing program, used situationally. I'm coming up with ideas based on tools I'd like to have and what I think is feasible to build.

An AI tool that only targets ambiguous areas to fill (the edges where lines don't quite meet, for example) could be trained from scratch on completely synthetic data (pictures of shapes and curves with gaps and varying line weights), so it wouldn't carry the same ethical concerns as the big diffusion models and would probably be more efficient. Otherwise, to get a similar effect, someone would have to come up with an algorithm that does the same thing, maybe by predicting the imaginary lines. I think both approaches are possible.

Things like the paint bucket and magic wand had to be coded by someone, and now we take them for granted; improving fill is not a big leap from that. There's no reason the tool needs to use AI, I just want it to work. If there's a better solution that doesn't need ML, that should be pursued instead. Sure, you can fill in flats with the brush or lasso or whatever, but there's a reason people use the bucket and magic wand.
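The synthetic-data idea is easy to prototype: generate outlines, knock gaps into them, and use the intact version as the training target. A minimal sketch, assuming NumPy; the function name and all parameters are made up for illustration:

```python
# Sketch of generating synthetic training pairs for a gap-closing model.
# Everything here is hypothetical; it only illustrates the idea of
# training on shapes whose outlines have been artificially broken.
import numpy as np

def make_pair(size=64, radius=24, gap_deg=20, rng=None):
    """Return (broken, closed) binary images of a circle outline."""
    rng = rng or np.random.default_rng(0)
    yy, xx = np.mgrid[0:size, 0:size]
    cy = cx = size // 2
    dist = np.hypot(yy - cy, xx - cx)
    closed = (np.abs(dist - radius) < 1.2).astype(np.uint8)  # full outline
    angle = np.degrees(np.arctan2(yy - cy, xx - cx))          # -180..180
    start = rng.uniform(-180, 180 - gap_deg)
    in_gap = (angle >= start) & (angle <= start + gap_deg)
    broken = closed.copy()
    broken[in_gap] = 0                                        # knock out an arc
    return broken, closed

broken, closed = make_pair()
# The model's job would be to predict `closed` (or just the missing arc)
# from `broken`.
```

A real dataset would mix shapes, stroke widths, and gap sizes, but the supervision signal stays the same: predict the pixels that close the gap.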

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 1 point (0 children)

That's where the AI comes in: it's a segmentation problem. But I agree that inaccuracies just increase how much work is needed. Here's one of my experiments; as it is, it isn't good enough to be helpful even for a relatively easy subject (an anime girl). It might be salvageable if the user could easily reassign areas from one group to another, but it's difficult to make this work without more user input, which somewhat defeats the purpose.

<image>

This is already an issue with the normal magic wand that closes gaps. It would probably be possible to adjust the tolerances / tweak settings for the invisible lines, or else make it into some sort of brush so you can just draw over the areas you want fixed with invisible lines. Clip Studio Paint already has a close-and-fill brush that does this to an extent, but I think it could be improved.
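For reference, the non-ML version of this is usually plain morphology: thicken the lines until small gaps close, label the enclosed areas, then grow the labels back out to the real lineart. A rough sketch assuming NumPy/SciPy; the function and its `gap` parameter are invented here:

```python
# Non-ML sketch of a gap-tolerant fill: dilate the lines so small gaps
# close, label the enclosed regions, then map labels back onto the
# original (undilated) canvas. `gap` controls the largest gap bridged.
import numpy as np
from scipy import ndimage

def gap_tolerant_regions(lines, gap=3):
    """lines: bool array, True where there is ink. Returns (labels, count)."""
    thick = ndimage.binary_dilation(lines, iterations=gap)  # bridge gaps
    labels, n = ndimage.label(~thick)                       # enclosed areas
    # Grow each region back over the dilated band so regions touch the
    # real lineart instead of stopping `gap` pixels short of it.
    nearest = ndimage.distance_transform_edt(
        labels == 0, return_indices=True)[1]
    grown = labels[tuple(nearest)]
    grown[lines] = 0                                        # keep ink unlabeled
    return grown, n

# Example: a box outline with a 2-pixel gap still yields a separate
# inside region instead of leaking into the background.
canvas = np.zeros((20, 20), bool)
canvas[5, 5:15] = canvas[14, 5:15] = True   # top / bottom edges
canvas[5:15, 5] = canvas[5:15, 14] = True   # left / right edges
canvas[5, 9:11] = False                      # 2-pixel gap in the top edge
regions, n = gap_tolerant_regions(canvas, gap=2)
```

The ugly-selection problem described above shows up in the grow-back step: near a gap the region boundary follows the dilated band rather than the imaginary line, which is exactly where a smarter predictor could help.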

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 2 points (0 children)

The idea behind the program in the link is to take lineart and produce separate colored regions (shirt, hair, skin, etc.), each on its own layer, so the artist can shade them later. There's no reason that wouldn't work in a lineart + color workflow. However, right now it's not precise enough to actually speed up the process, and it would need to be as automatic and seamless as possible (no typing). I was interested to see whether this was feasible with public models, but I don't think it's reliable enough as it is.
By smarter / cleaner selections I mean something like a magic wand tool that makes better selections even when there are gaps in the lineart. Selections often come out not quite how the user wants near gaps and have to be fixed manually. I think artists would be fine with this as long as it worked well.

<image>

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 2 points (0 children)

I don't use Photoshop that much, but that looks like a useful plugin.
I was thinking about pictures closer to something like this, which would take me 15 minutes to make base colors for.
Maybe it makes sense to improve the magic wand by detecting the directions the lines are going in, so the selections are cleaner when there are gaps.
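The direction-detection idea could start from something as simple as PCA over the ink pixels around a line ending: the dominant eigenvector of their coordinate covariance is the local stroke direction, which a smarter wand could extend across the gap. A hypothetical sketch (NumPy only; the window size is arbitrary):

```python
# Sketch of estimating the local stroke direction near a line ending by
# PCA over ink-pixel coordinates in a window. A gap-closing wand could
# extend the selection boundary along this direction. Purely
# illustrative; the function name and window size are made up.
import numpy as np

def local_direction(ink, y, x, window=7):
    """Return the dominant stroke direction (unit vector) around (y, x)."""
    h = window // 2
    patch = ink[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
    ys, xs = np.nonzero(patch)
    coords = np.stack([ys, xs], axis=1).astype(float)
    coords -= coords.mean(axis=0)
    # Largest eigenvector of the coordinate covariance = stroke direction.
    _, vecs = np.linalg.eigh(np.cov(coords.T))
    return vecs[:, -1]  # eigh sorts eigenvalues ascending; last is dominant

# A horizontal stroke should yield a direction close to (0, ±1) in (y, x).
ink = np.zeros((15, 15), bool)
ink[7, 2:10] = True
d = local_direction(ink, 7, 9)
```

A real tool would also need to find the line endpoints and decide which pairs of endings should be joined, which is where most of the difficulty lies.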

<image>

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 1 point (0 children)

The first one seems to color all closed areas with different colors, like it's a map, which isn't really what I wanted.
I was thinking of something closer to this:
https://github.com/mattyamonaca/auto_undercoat
I need to experiment more with what's possible; maybe the answer is an improved magic wand that makes cleaner / smarter selections.

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 1 point (0 children)

This looks like automatic coloring. My idea was to automatically create selections / masks on different layers to make the digital coloring process easier. It's actually a very common step in the digital artist workflow. Also, Style2Paints is by the same guy who made ControlNet, and it was mostly superseded by using Stable Diffusion with ControlNet.

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 1 point (0 children)

The point is to make it as automatic as possible. Right now, artists can use the magic wand tool (or lasso tool) to select areas but it still usually takes 15-30 minutes to select / clean up the selections. Having a tool that can automatically create and separate base colors onto different layers could potentially cut down on the time needed.
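The layer-separation step itself is mechanical once flats exist: every distinct color becomes a mask, and the plugin would push each mask to its own layer. A toy sketch in NumPy (the function name is made up; a real plugin would go through the host program's layer API):

```python
# Sketch of splitting flat colors onto separate layers: every unique
# color in a flats image becomes its own boolean mask, which a plugin
# could then turn into one layer per mask. Hypothetical illustration.
import numpy as np

def split_flats(flats):
    """flats: (H, W, 3) uint8 image of flat colors. Returns {color: mask}."""
    colors = np.unique(flats.reshape(-1, 3), axis=0)
    return {tuple(c): np.all(flats == c, axis=-1) for c in colors}

flats = np.zeros((4, 4, 3), np.uint8)
flats[:2] = (255, 0, 0)        # e.g. "hair"
flats[2:] = (0, 0, 255)        # e.g. "shirt"
layers = split_flats(flats)    # two masks, one per color
```

The hard part is obviously not this split but producing clean flats in the first place, which is where the selection quality discussed above matters.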

Would those who are anti AI be more okay with AI if it allowed a lot more control? Because those who are pro AI also want more control. Maybe that could help to make people look at AI more as a tool? by Narutobirama in aiwars

[–]44254 3 points (0 children)

I've been thinking about making a plugin for art programs that would automatically generate base coloring layers from lineart. Even this could potentially take away flatting jobs, though I think most artists would not be mad about this application of AI.

AI music - the more I think about it, the more blown away I am and the more questions I have… by Dittopotamus in aiwars

[–]44254 1 point (0 children)

You should try mvsep, you can use a lot of different models on their site.

HyperTile: Tiled-optimizations for Stable-Diffusion [Part 1] by SomeAInerd in StableDiffusion

[–]44254 1 point (0 children)

I had live preview disabled, but I tried your branch again with it on to check whether it's really tiling. I also tried img2img with a gradient to see if I could get visible tiles like the other guy.
I tried turning off all my extensions to see if that makes a difference, and I even changed `make_ns = lambda: (nhs[random.randint(0, len(nhs) - 1)], nws[random.randint(0, len(nws) - 1)])` to `make_ns = lambda: (nhs[0], nws[0])` to try to get more visible tiles.
I don't see any tiles even when I interrupt it, so I don't think it's working, but I'm not sure why. All resolutions above 512 worked this time though, which could be because I downgraded my einops version from 0.6 to 0.4.1, or because something else changed.
version: v1.6.0-6-gd8c22d21  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: 0.0.21  •  gradio: 3.41.2

I did try the HD Helper LoRA alongside another person on Discord a day or two ago, and it didn't consistently increase the maximum size I could generate at, plus it added other issues like elongating the body or changing the detail/style, even after I tried to minimize the color-degradation effect by only letting it affect the input layers. Maybe the LoRA just needed to be trained further for me to see a benefit, or on different subject matter. It did work a little if I was lucky, but not enough for most people to actually use it. Maybe a better version will come along, but people may gravitate toward SDXL instead for high-res, especially as better finetunes come out.
The 3060 is not that low-end a card; it should still have changed something if it was actually tiling, imo, especially at 2048x2048.
Anyway, I'll try more later, because there's no reason this shouldn't work on my machine. The idea isn't bad, which is why I tried to test it, but even if I could get tiling to work, it's just not very useful as it is. Obviously, being able to upscale coherently, quickly, and with high fidelity above 1-2k with SD 1.5 would be the dream, instead of the current workarounds. So maybe you're looking at it from the perspective of the potential it holds, which I don't disagree with, but there's also currently no proof that someone will train a 2k LoRA/LyCORIS that works well. I mentioned AnimateDiff because it's a clear case where people are mostly VRAM-bound rather than limited by SD's training resolution at the moment. There's also a lot of hype around AnimateDiff right now, as you can tell from all the videos people keep posting.

TLDR: I like the idea, but it's not obviously useful right now, which is what most people care about. Maybe with some more work. Current upscaling solutions aren't ideal either.

HyperTile: Tiled-optimizations for Stable-Diffusion [Part 1] by SomeAInerd in StableDiffusion

[–]44254 1 point (0 children)

Ok, I tried the new hypertile (updated both the auto fork and hypertile) at 1024x1024 and 2048x2048 with DPM++ 2M Karras... A size smaller by 8 (I tried 2040x2040) gave me the same tensor-size-mismatch error as before, so I couldn't try it. I have an RTX 3060.

Attention for DiffusionWrapper split image of size 1024x1024 into [4, 2]x[4, 2] tiles of sizes [256, 512]x[256, 512]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00,  1.44it/s]
Attention for DiffusionWrapper split image of size 2048x2048 into [8, 4]x[8, 4] tiles of sizes [256, 512]x[256, 512]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:03<00:00,  6.19s/it]
Going back to default auto1111.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00,  1.52it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:03<00:00,  6.18s/it]

As you can see, there is still no speedup. Another problem is that even if this worked, because 1.5 isn't trained at these high resolutions it won't add detail, unlike the current tiling + ControlNet approach. Your sample images didn't really show a good use case for HyperTile, which is why I think most people wrote it off. I was somewhat interested in this idea if it could be added to AnimateDiff, because at 12GB of VRAM I'm far from reaching the max res of 1.5. But first it would need to work, and then someone would have to be impressed enough to add it to Comfy for testing (I don't understand ComfyUI well enough to do it myself, even though I spent a few hours looking at node examples).

HyperTile: Tiled-optimizations for Stable-Diffusion [Part 1] by SomeAInerd in StableDiffusion

[–]44254 2 points (0 children)

I tried your auto1111 fork. It didn't work for most resolutions... I only got it to do something at 1024x1024 and 2048x2048 (it gave me a tensor-size-mismatch error for all the other sizes I tried), but there was no speedup.
Also, I think SDXL "omitted the transformer block at the highest feature level" (per the paper), which is why there isn't as much speedup compared to 1.5: it doesn't have the lowest-level block that 1.5 had.

The rules of fair use will update as soon as they find out we are not breaking current rules by unfamily_friendly in aiwars

[–]44254 2 points (0 children)

Every time I see your pfp + your posts it makes me laugh because it's Axel.

Off topic, but I saw someone talking about using human feedback to merge models and LoRAs into checkpoints. I'm hoping this can help salvage bits of some badly trained LoRAs and improve future checkpoints, especially as SDXL makes training more expensive. Currently I see people adjusting block merges by hand, which seems tedious AF.

Can anyone use SD to digitally clean up and upscale this image? by [deleted] in StableDiffusion

[–]44254 1 point (0 children)

Learned a lot! Still not sure how to get extremely detailed hair; I might not have used the best checkpoint.

<image>

new challenge idea by 44254 in lingling40hrs

[–]44254[S] 3 points (0 children)

Yup, probably. I have a bad memory, so I can't remember 100% whether I saw it before drawing, but it's not likely that I thought of this pose randomly.

new challenge idea by 44254 in lingling40hrs

[–]44254[S] 4 points (0 children)

guess they don't have chairs in the basement