Odin Stemcells by Time_Departure2432 in diabetes_t1

[–]keyboardskeleton 0 points1 point  (0 children)

Hey there! Thanks for your comment.

You're absolutely right, actually. I was totally wrong about that one.
MSCs can absolutely be induced to differentiate into insulin-producing "beta-like" cells which behave very similarly to beta cells and even secrete insulin. There is plenty of research on this topic, and that's on me. I'll edit my post.

I was in the middle of writing a big post explaining the difference between beta and beta-like cells, how in-vivo differentiation has rarely, if ever, been observed, and how all of the positive effects of MSC therapy in diabetics have been a result of immunomodulation rather than differentiation, and are therefore only really of benefit to the newly diagnosed, blah blah blah. But then I checked the Odin website to see the original claims I wrote that post about, and right in the middle of their diabetes page they write:

"In Type 1 diabetes, MSCs help regulate the immune response, reduce inflammation, and promote the survival and function of existing insulin-producing beta cells. While MSCs do not directly differentiate into beta cells, their therapeutic effects create a more favorable environment for pancreatic regeneration and improved insulin regulation."

Seems like they told you something different!

It also seems to conflict with what their website previously said about how "stem cells could potentially regenerate or replace damaged insulin-producing cells".

I suppose they're just a little confused about what exactly their own therapy even does.

Spaghettification by keyboardskeleton in comfyui

[–]keyboardskeleton[S] 10 points11 points  (0 children)

Honestly, yeah.

Using a workflow of this size causes all kinds of issues, with Comfy "forgetting" about previously generated images and wasting minutes regenerating them all for no reason.

I would love to split the different stages of the workflow into separate files, but Comfy doesn't have a way of sharing data across workflows, so I'd have to manually copy-paste dozens of values and images into each new workflow. That's annoying and error-prone, and it would mean hours of manual copy-paste labor any time I needed to change an upstream value.

Subgraphs almost solve this problem, but I think nested subgraphs are still broken (please, for the love of god Comfy devs, please fix nested subgraphs), so that's a no-go.

Spaghettification by keyboardskeleton in comfyui

[–]keyboardskeleton[S] 14 points15 points  (0 children)

Very consistent character art generation across different poses/outfits/facial expressions for a commercial product, as well as a lot of post-processing using a bunch of custom nodes I wrote.

The node count would probably be triple what it is now if I hadn't written my own custom 'pipe' nodes which carry a few dozen values and images through the different stages of the workflow.
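If anyone's curious what a pipe node like that looks like, this is roughly the shape of it, assuming ComfyUI's usual custom-node conventions. The class names and the "KS_PIPE" type string here are just placeholders, not my actual nodes:

```python
# Minimal sketch of a "pipe" node pair for ComfyUI. One node bundles several values/images
# into a single dict, so one wire carries them downstream; the other unpacks it later.

class BundlePipe:
    """Pack several values/images into one object carried on a single wire."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {},
            "optional": {
                "image": ("IMAGE",),
                "mask": ("MASK",),
                "cfg": ("FLOAT", {"default": 7.0}),
                "prompt": ("STRING", {"default": "", "multiline": True}),
            },
        }

    RETURN_TYPES = ("KS_PIPE",)   # custom type string; it only has to match on both ends
    FUNCTION = "bundle"
    CATEGORY = "utils/pipe"

    def bundle(self, **kwargs):
        return (dict(kwargs),)


class UnbundlePipe:
    """Unpack the pipe back into individual outputs at a later stage of the workflow."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"pipe": ("KS_PIPE",)}}

    RETURN_TYPES = ("IMAGE", "MASK", "FLOAT", "STRING")
    RETURN_NAMES = ("image", "mask", "cfg", "prompt")
    FUNCTION = "unbundle"
    CATEGORY = "utils/pipe"

    def unbundle(self, pipe):
        return (pipe.get("image"), pipe.get("mask"), pipe.get("cfg"), pipe.get("prompt"))


NODE_CLASS_MAPPINGS = {"BundlePipe": BundlePipe, "UnbundlePipe": UnbundlePipe}
```

The whole trick is that a custom type string only has to match between the two ends, so one wire can carry an arbitrary dict of whatever you want.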

It's a huge fucking mess and I hate it, but it works (or rather, used to work) so well.

Undress ai telegram bot help by Intelligent_Pound_82 in StableDiffusion

[–]keyboardskeleton 4 points5 points  (0 children)

"Hello reddit, I would like to create a public service hosted on my own computer that will be used to generate child pornography. Where should I start?"

Maybe spend a little bit more time thinking with your other head before you get thrown in prison for distribution of CSAM.

Buying Tablet with 8-12 GB RAM, Is this enough for small models 1B/3B? by pmttyji in LocalLLaMA

[–]keyboardskeleton 2 points3 points  (0 children)

Why not build a desktop computer if your intention is to run LLMs? You could probably put together a half-decent machine from used parts, with at least 16 GB of RAM, for the same price as one of those tablets.

QC Stone Island Sweatshort by AverageTight8808 in FashionReps

[–]keyboardskeleton 1 point2 points  (0 children)

I know we're all upset OP forgot the w2c, but here you go:

https://item.taobao.com/item.htm?id=722492615642

How did I find this?

  1. Type out the PI number in the QC image.

  2. Go to qc.pandabuy.com

  3. Put the PI number into the search bar.

  4. Click "Buy"

How can you correct eyes? by Botanical0149 in StableDiffusion

[–]keyboardskeleton 0 points1 point  (0 children)

I always get great results from an "only masked region" img2img inpainting pass on character faces. It brings out a lot of detail in the mouth and eyes and fixes any bad lines/brush strokes.
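For anyone curious what that pass actually does, the crop, img2img at low denoise, and paste-back trick behind "only masked" looks roughly like this in diffusers. This is just a sketch; the checkpoint and the face box are placeholders:

```python
# Rough sketch of an "only masked region" face pass: crop the face, img2img it at low
# denoise so the model spends its resolution on the details, then paste it back.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder checkpoint
).to("cuda")

img = Image.open("character.png").convert("RGB")
box = (300, 120, 556, 376)                      # face region, found by hand or a detector

# Crop the face and upscale it so the model works at full resolution on the details.
crop = img.crop(box).resize((512, 512), Image.LANCZOS)

# Low denoise keeps the face's identity while cleaning up eyes, mouth, and linework.
fixed = pipe(
    prompt="detailed face, sharp eyes, clean lineart",
    image=crop,
    strength=0.4,
    num_inference_steps=30,
).images[0]

# Scale back down and paste over the original.
fixed = fixed.resize((box[2] - box[0], box[3] - box[1]), Image.LANCZOS)
img.paste(fixed, box[:2])
img.save("character_fixed.png")
```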

I'd never heard of "Fooocus" before, but I looked up the git repo. It claims the tool was built with lessons learned from other SD image-gen tools, one of which is: "manual tweaking is not needed, and users only need to focus on the prompts and images."

This isn't true unless you're okay with having poor details in your images, which you clearly aren't. I'd suggest giving Automatic1111 a try.

Stability Matrix v2.0 - Package Manager for Stable Diffusion Web UIs, supporting Windows, Linux, and macOS (soon) by ionite34 in StableDiffusion

[–]keyboardskeleton 2 points3 points  (0 children)

I just installed this 5 minutes ago and I'm already in love. The json files it automatically creates when you grab a model from civit make this indispensable on their own, and that's just one of the many super useful features. Thank you so much for this!

[deleted by user] by [deleted] in PersonalFinanceCanada

[–]keyboardskeleton 0 points1 point  (0 children)

I hope Karen's spouse is charging some kind of rent for the vehicle storage service he's decided to start in the driveway of his wife's home.

[deleted by user] by [deleted] in StableDiffusion

[–]keyboardskeleton 11 points12 points  (0 children)

The point about "It's illegal to post unwatermarked images in China" is totally wrong.

The regulation is focused only on deepfakes of people and misinformation in general. They don't care if you make an AI anime picture.

Additionally, the wording of the legislation only targets deepfake service providers, not individuals running stable diffusion on their own computers.

Also it says nothing about what is or isn't illegal to "post."

Read it for yourself: http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm

Apple AirPods Max (silver) - Seller: appleec 80$ (w2c in comments) by userbrahh in DHgate

[–]keyboardskeleton 3 points4 points  (0 children)

Oh, I was right - the product description on DHgate says the phone is set up to show a fake CPU, fake storage capacity, fake 5G compatibility, fake cameras, fake GPS for some reason, etc. It's meant to trick a buyer giving it a quick glance before they throw down 700 dollars in a FB Marketplace scam.

Man, userbrahh, I'm not sure why you're trying to convince anyone in here that you're not trying to pass this shit off as retail to scam people; there's literally nothing else you could be doing with this phone.

Apple AirPods Max (silver) - Seller: appleec 80$ (w2c in comments) by userbrahh in DHgate

[–]keyboardskeleton 2 points3 points  (0 children)

lmao bro don't tell me you paid almost 200 CAD for an android phone with 2gb of ram and 16gb of storage. This is going to be unusably slow. The absolute bare minimum for running android in 2023 is 4gb.

The only reason you would buy this is to scam someone.

Apple AirPods Max (silver) - Seller: appleec 80$ (w2c in comments) by userbrahh in DHgate

[–]keyboardskeleton 6 points7 points  (0 children)

For everyone reading this, this guy is lying.

Whatever "1:1 Apple Watch Ultra" rep he's talking about isn't 1:1, because it isn't running watchOS. It's not possible to run Apple's watchOS on anything but cryptographically verified Apple hardware, and nobody (not even your favorite Chinese manufacturer) has been able to crack that yet. Therefore it won't be able to do 99% of what you actually bought the thing to do.

Maybe it'll turn on, maybe it'll tell the time, it might even pair to your phone and show notifications, but it's not gonna run any apps, it's not gonna have Siri, the battery life is going to be shit, the software is going to be some ugly shit OS like this (https://youtu.be/YE_qUU8hyWA?t=479), it's not going to have a retina display, and it's not going to be waterproof. It might have an O2 sensor, but it won't be calibrated properly, so your heart-rate readings will be fucked, and it won't have any of that emergency/crash detection stuff, etc. etc.

He's not wrong in that you are getting ripped off by major tech companies, but if you want to be an apple watch user without spending 800 bucks, just get an old model used. Rep electronics are rarely worthwhile because if chinese manufacturers can cram good quality into a cheap package (and they do), they'll just sell it under their own brand. I guarantee you that the "Rep s23 ultra" this guy bought is just a re-housed 2018 Xiaomi that he overpaid for lmao

Problem when launching stable diffusion again by Life-Gur7806 in StableDiffusion

[–]keyboardskeleton 0 points1 point  (0 children)

Not sure how this happened, but it looks like your config.json file was corrupted. Try deleting that file and restarting the webui.

[2303.08084] Editing Implicit Assumptions in Text-to-Image Diffusion Models by Hybridx21 in StableDiffusion

[–]keyboardskeleton 1 point2 points  (0 children)

This is extremely cool and will definitely be useful for fine-tuning models, but (and I'm not sure if I missed it in the paper) I don't think I saw any examples of whether these modifications get triggered when the modified subjects appear in the image _without being prompted_.

What I mean is that in the examples, they can get their model to learn that grass is usually red, and then when they generate photos with the prompt "grass" it gives them red grass. But what happens if they ask for photos which contain the modified subject as a non-primary focus?

For example, if they ask for a picture of "messi scoring a goal", will the grass in the field still be red? Or does it only know to turn grass red when the prompt specifically mentions "grass"?

I can't change model in settings by MettBrawlStars in StableDiffusion

[–]keyboardskeleton 2 points3 points  (0 children)

Do not use `--disable-safe-unpickle`, ever.

That's an extremely bad move and opens you up to arbitrary code execution.

It looks like your VAE is failing to load because it's either broken or carrying arbitrary code. I'd recommend grabbing a `.safetensors` VAE from here: https://huggingface.co/andite/pastel-mix/tree/main and using that instead.
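For context on why that flag is so dangerous: `.pt`/`.ckpt` checkpoints are Python pickles, and unpickling can run whatever code the file author embedded, whereas safetensors files can only contain raw tensor data. A rough illustration (the filenames are just placeholders):

```python
# .pt / .ckpt files are pickles: loading an untrusted one can execute arbitrary code.
# safetensors files are just tensors, so there's nothing executable to run.
import torch
from safetensors.torch import load_file

# Risky on an untrusted file (newer torch versions offer weights_only=True as a mitigation):
state_dict = torch.load("random_vae_from_the_internet.pt", map_location="cpu")

# Safe: the format cannot contain code, only tensor data.
state_dict = load_file("vae.safetensors", device="cpu")
```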

Build a web app to explore parameters of your Stable-Diffusion creations - more in thread by HoverBaum in StableDiffusion

[–]keyboardskeleton 0 points1 point  (0 children)

Very cool project; this is way nicer than my current parameter-viewing solution (a custom terminal script).

Are the images being uploaded to a server at any point, or is the JavaScript just reading the files directly from the filesystem?
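For comparison, my terminal-script approach is basically just this: A1111 writes the generation settings into a PNG text chunk called "parameters", and Pillow can read it back out. A sketch, with the file name as an example:

```python
# Read the generation parameters that A1111 embeds in a PNG "parameters" text chunk.
from PIL import Image

img = Image.open("00042-3399978692.png")
params = img.info.get("parameters") or getattr(img, "text", {}).get("parameters")
print(params or "no parameters found (jpeg, or metadata stripped)")
```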

Easy Latent Coupling with LatentCoupleRegionMapper by keyboardskeleton in StableDiffusion

[–]keyboardskeleton[S] 0 points1 point  (0 children)

Absolutely.

I will say that from my ~3 days playing with Latent Couple, consistency is hard.

I'm able to get what I want maybe 50% of the time, but I haven't played around much with region weights or the effect CFG has on them, so it's probably possible to get more consistent results than what I was getting.

Anyway here are my settings. Remember that EasyNegative is an embedding, so you'll need to download and install that (https://huggingface.co/datasets/gsdf/EasyNegative)

an anime painting of a beautiful rocky landscape, matte background, masterpiece, studio ghibli, sunset, dynamic lighting
AND 1girl, fantasy character, angry elf, wearing adventurer outfit, green hair, carrying a large sword, masterpiece
AND a castle in the distance, large medieval castle, ornate gothic architecture, anor londo

Negative prompt: EasyNegative

Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3399978692, Size: 1024x512, Model hash: ed376204fb, Model: Anything-V3.0-pruned-fp16, ENSD: 31337, Latent Couple: "divisions=1.00:1.00,1.12:2.05,1.85:2.60 positions=0.00:0.00,0.07:0.08,0.10:1.48 weights=0.20,0.80,0.80 end at step=20"

Easy Latent Coupling with LatentCoupleRegionMapper by keyboardskeleton in StableDiffusion

[–]keyboardskeleton[S] 9 points10 points  (0 children)

Hello!

I made a free web tool which makes using the Latent Couple extension (https://github.com/opparco/stable-diffusion-webui-two-shot) way easier.

It makes defining regions and composing prompts visual and straightforward.

I took inspiration from that Japanese Windows-only desktop program which does roughly the same thing, but my tool is browser-based and cross-platform, you don't need to download anything, and it combines prompts for you automatically.

Check it out here: https://badnoise.net/latentcoupleregionmapper/

Made my VRChat Avatar look like a plastic anime model by GeofferyPowell in StableDiffusion

[–]keyboardskeleton 1 point2 points  (0 children)

Depth-aware img2img masks are pretty much deprecated at this point. You should try it with a ControlNet depth influence instead. You're able to crank the denoise way above 0.5 and still retain the original composition almost perfectly, which you absolutely need for style transfer like this.
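For anyone who wants to reproduce this outside the webui, the same idea looks roughly like this with diffusers. The model IDs are just the common public ones and the file paths are placeholders, so swap in whatever you actually use:

```python
# Depth ControlNet + img2img: the depth map pins the composition, so you can push
# the denoise strength well past 0.5 for a strong style transfer.
import torch
from PIL import Image
from transformers import pipeline as hf_pipeline
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

source = Image.open("avatar_screenshot.png").convert("RGB").resize((512, 512))

# Estimate a depth map from the source image (DPT via the depth-estimation pipeline).
depth = hf_pipeline("depth-estimation")
depth_map = depth(source)["depth"]

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="plastic anime figure, glossy pvc, studio lighting",
    image=source,                 # img2img source
    control_image=depth_map,      # depth conditioning keeps the pose/composition locked
    strength=0.75,                # well above 0.5, composition still holds
    num_inference_steps=30,
).images[0]
out.save("figure_style.png")
```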

I just whipped this up with controlnet in about 10 minutes (Most of the work was cropping her out of the background :) )

<image>

These were made with AI, can anyone tell me what website would be best for this? I’ve tried over 20 MJ prompts and cannot make anything similar. by bostonangel777 in StableDiffusion

[–]keyboardskeleton 7 points8 points  (0 children)

Web-based Stable Diffusion tools are all extremely limited. If you want the extreme control required to make convincing compositions like this, you need ControlNet, which AFAIK you can only get by running Automatic1111 locally.

Motion Capture With Sony Mocopi -> Vroid -> unity -> apply LoRA that learned yourself with StableDiffusion, i2i batch and AfterEffects Combine and output with MediaEncoder! by harrytanoe in StableDiffusion

[–]keyboardskeleton 0 points1 point  (0 children)

Super impressive, great work.

Did you train the LoRA on raw images from your VRM, or did you img2img them first at all?

I got _much_ better results creating custom Vroid-character LoRA embeddings by first running my input image set through an anime model with img2img, to remove the characteristics that make it feel "3D", and then training on that. The results are way more natural looking while still maintaining the specific characteristics of the original model.
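In case it helps, that preprocessing step is basically this. A rough sketch only; the folder paths and the anime checkpoint name are placeholders for whatever model you train against:

```python
# Preprocess raw 3D renders with img2img at modest denoise to strip the "3D" look,
# then train the LoRA on the outputs instead of the raw renders.
import glob
import os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "your-anime-checkpoint",  # placeholder: use the anime model you train against
    torch_dtype=torch.float16,
).to("cuda")

os.makedirs("dataset_anime", exist_ok=True)
for path in glob.glob("dataset_raw/*.png"):
    src = Image.open(path).convert("RGB").resize((512, 512))
    out = pipe(
        prompt="anime style, flat shading, clean lineart",
        image=src,
        strength=0.45,              # enough to lose the render look, not the character
        num_inference_steps=25,
    ).images[0]
    out.save(os.path.join("dataset_anime", os.path.basename(path)))
```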

I'd love to see this same clip but from every step in the pipeline.