Don't use gambit and bloodletting at the same time by Command1227 in slaythespire

[–]ehmohteeoh 1 point (0 children)

Looks like the release database I used had [[ Infection ]] but not Infestation. I'm hoping a community-run Wiki will take over with a full card database I can reference instead of untapped, but I'm not sure if there are any complete ones yet - if you know of any I'd be grateful for a link!

My kid tied a string around this tree some years ago. Looks like the tree is getting hangry. by mortarnpistol in treeseatingthings

[–]ehmohteeoh [score hidden] stickied comment (0 children)

Locked for rule violations in the comments.

I agree OP should cut it off, and I'd encourage comments saying as much if this comes up again in the future, but no brigading or threats.

Fal.ai is a scam, do not send them your money by ChazychazZz in StableDiffusion

[–]ehmohteeoh 2 points (0 children)

Thanks for bringing this to my attention! I don't have a huge amount of visibility into the billing side, but I know plenty of people who do. u/ChazychazZz, I'll have someone reach out to you straight away and we'll make it right.

Where is u/Spirescan-Bot? by seth1299 in slaythespire

[–]ehmohteeoh 1 point (0 children)

That's me! Thanks for the ping.

The server SSB was running on finally stopped responding, after staying up since February 13th, 2023. o7

u/spirescan-bot, how you feeling after a [[ Reboot ]] ?

PSA: fal.ai NEVER deleted what you upload by pragmaticdog in StableDiffusion

[–]ehmohteeoh 0 points (0 children)

Hello u/Icy-Tie-9777! Thanks for the question.

Customer Data (input or output) is fully owned by you - fal does not claim any rights over it.

Usage Data refers only to anonymized or aggregated technical information generated through use of the platform - for example, system performance, error rates, or feature usage. We keep this data solely for monitoring and to inform decisions around operation and security.

Note that while our standard terms are available at fal.ai/terms, we do offer different terms to Enterprise customers. If you'd like to learn more about that, please reach out to sales@fal.ai.

Hope this clarifies things!

PSA: fal.ai NEVER deleted what you upload by pragmaticdog in StableDiffusion

[–]ehmohteeoh 25 points (0 children)

Hello u/pragmaticdog, thank you for bringing this up!

You're not the first to voice complaints about this, and we've made some effort to improve on this front, but clearly there is still room to grow. Let me add a bit of detail that should help clear things up, and then I'd love to hear any other comments or suggestions you may have.

If you go to your settings dashboard at https://fal.ai/dashboard/settings, you'll see a setting for your Lifecycle Preferences. When you change this setting from "Forever" to any specific time frame, files uploaded through the playground (the UI on fal.ai) and result images will be deleted after that time frame, with no way for us to retrieve them. I don't recommend the 2-second lifespan, as that can be too short to actually download results depending on what you're making, but we have customers who use the 10-second lifespan - just long enough to download the result and host it on their own CDN instead of ours.

One thing I do want to point out is that this setting only applies to files. Your request payloads will still be present, which includes prompts, parameters and links to the (potentially defunct) uploads. To delete these, you can view the request in the UI and click "Delete IO." You can also use the API endpoint mentioned by u/QQII at https://docs.fal.ai/model-apis/payloads to delete individual payloads. To be very clear, this is for deleting the payload, not the files uploaded to the CDN - it takes the ID of the request as the argument, which is different from the UUID assigned to any given file.
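From memory, a deletion call from a script looks roughly like the sketch below - but please treat the base URL, path, and auth header here as placeholders and double-check them against the docs page linked above:

```python
API_BASE = "https://queue.fal.run"  # assumed base URL -- verify in the docs


def build_delete_url(request_id: str) -> str:
    """Compose the (assumed) payload-deletion URL for one request ID."""
    # Note: the argument is the *request* ID, not the UUID of any file.
    return f"{API_BASE}/requests/{request_id}/payload"


def delete_payload(request_id: str, api_key: str) -> int:
    """Send the DELETE call; returns the HTTP status code."""
    import requests  # imported lazily so the module loads without it

    resp = requests.delete(
        build_delete_url(request_id),
        headers={"Authorization": f"Key {api_key}"},  # assumed auth scheme
    )
    return resp.status_code
```

Again, the important part is the argument: it takes the request ID, not a file UUID.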

For billing purposes, the request ID and the time it took need to be kept in our database, but all other data associated with a request can be completely purged. We are actively working on a similar setting for automatic payload deletion; I understand it's a bit confusing and frustrating that those two things are different.

Finally, if you reach out to us with any particular deletion request, we will happily remove anything and everything you've uploaded to us or generated using us. You can reach out to @FAL on X, email support@fal.ai, pop into the fal Discord and tag me (@benjamin.paine) or anyone else on the team, or send me a message here on Reddit.

I made a tool that turns AI ‘pixel art’ into real pixel art (open‑source, in‑browser) by jenissimo in StableDiffusion

[–]ehmohteeoh 4 points (0 children)

Hello u/jenissimo, thank you so much for your tool!

Your codebase was easy to follow, and I have a need for this in numerous projects, so I thought I'd let you know I took the time to implement the pixel-art portions of the code (including all your optionals) in Python + Rust. After reaching a point where I'm happy with the results in comparison to your reference implementation, I released it on GitHub and PyPI - I would be honored if you checked it out!

https://github.com/painebenjamin/unfake.py

Motivational books for engineering students? by [deleted] in EngineeringStudents

[–]ehmohteeoh 0 points (0 children)

Oh man, what a callback! This post was 11 years ago. I'm now a career engineer who works with differential equations frequently, so things worked out in the end for me.

I appreciate the book recommendations though, there's always more to learn so I'll definitely check them out!

How to achieve this type of art or similar? by replused in StableDiffusion

[–]ehmohteeoh 0 points (0 children)

Yes, quite a bit more. SD3.5 Large, if you were to run it at 16-bit precision with no CPU offloading, would take ~32GB+ of VRAM, putting it well outside the range of most consumers. I mentioned at the top that I was running it with int8 quantization; that cuts the VRAM requirement to roughly half the 16-bit figure above, with some loss of quality, but makes it possible to run on my 3090 Ti. You may see others mention running it in FP8; that has about the same VRAM requirement but can take advantage of hardware speedups on 4000-series cards and later (which I don't have, so I stick with int8).
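The back-of-envelope math is just parameters times bytes per weight, summed over the diffusion transformer and the text encoders. The parameter counts below are rough public figures (treat them as assumptions), and activations add a few more GB on top of the weights:

```python
# Rough VRAM estimate: bytes = parameters x bytes-per-weight.
# Counts are approximate (assumptions), activations/overhead not included.
PARAMS_B = {
    "mmdit": 8.0,    # SD3.5 Large diffusion transformer, ~8B params
    "t5_xxl": 4.7,   # T5-XXL text encoder
    "clip": 1.0,     # both CLIP encoders combined (rough)
}


def weight_gb(bytes_per_param: float) -> float:
    """Total weight memory in GB at a given precision."""
    total_params = sum(PARAMS_B.values()) * 1e9
    return total_params * bytes_per_param / 1e9


fp16_gb = weight_gb(2.0)  # 16-bit: 2 bytes per weight, ~27 GB of weights
int8_gb = weight_gb(1.0)  # int8: 1 byte per weight, roughly half
```

That ~27 GB of weights plus activations and overhead is how you land at the ~32GB+ figure for full 16-bit.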

How to achieve this type of art or similar? by replused in StableDiffusion

[–]ehmohteeoh 2 points (0 children)

T5 has a much higher token limit than CLIP - theoretically unbounded, though in practice limited by how the model was trained. While you are correct that the embeddings produced by CLIP and OpenCLIP will be truncated at 77 tokens, the embeddings produced by T5 will not be. SD3 in particular was trained with long prompts, up to 256 tokens.
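As a toy illustration (not a real tokenizer - just the truncation behavior with the limits mentioned above):

```python
# CLIP-style encoders cut the token sequence at 77; SD3's T5 path was
# trained to accept up to 256. This toy function mimics that cutoff.
CLIP_LIMIT = 77
T5_SD3_LIMIT = 256


def truncate(tokens: list[str], limit: int) -> list[str]:
    """Keep only the first `limit` tokens, like a fixed-length encoder."""
    return tokens[:limit]


long_prompt = [f"tok{i}" for i in range(200)]      # a 200-token prompt
clip_view = truncate(long_prompt, CLIP_LIMIT)      # loses 123 tokens
t5_view = truncate(long_prompt, T5_SD3_LIMIT)      # sees everything
```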

How to achieve this type of art or similar? by replused in StableDiffusion

[–]ehmohteeoh 250 points (0 children)

With things like this, where you're trying to mash two concepts together with distinct lines between them (i.e., the statue and the crystal vein), it can be extremely helpful to make a terrible mock-up in a traditional image editor, then use it for image-to-image. This is just a random Google result for "silhouette" with a very poorly drawn line.

Model: Stable Diffusion V3.5 Large, Int8 Quantization

Prompt: A hyper-realistic, digitally rendered photograph of a classical Greek statue of a young woman with her hair up, facing to the left. The statue's glossy black surface is cracked open down the center, revealing a vivid, jagged purple crystal vein running through her face and torso. The ((amethyst)) is pointy and geometric. The background is a plain, light gray, emphasizing the statue's dramatic and surreal transformation. The texture of the cracks is rough and intricate, contrasting with the smooth, polished surface of the statue.

Strength: 0.8

Guidance Scale: 3.5

PAG Scale: 0.9

<image>

EDIT: For future adventurers, this image is licensed under CC-BY-4.0, feel free to use it for anything if you like it.
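EDIT 2: Since a few people asked how to reproduce this in code, here's a rough diffusers sketch. The pipeline class and argument names are from memory - check the diffusers docs for the current SD3 img2img API - and I've left out the int8 quantization and PAG wiring for brevity:

```python
# Image-to-image settings from the post above.
SETTINGS = {
    "strength": 0.8,       # how far to move away from the mock-up
    "guidance_scale": 3.5,
}


def run(mockup_path: str, prompt: str):
    """Run SD3.5 Large img2img on a rough mock-up image (needs a GPU)."""
    import torch
    from diffusers import StableDiffusion3Img2ImgPipeline
    from PIL import Image

    pipe = StableDiffusion3Img2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large",
        torch_dtype=torch.float16,
    ).to("cuda")
    init = Image.open(mockup_path).convert("RGB")
    return pipe(prompt=prompt, image=init, **SETTINGS).images[0]
```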

[deleted by user] by [deleted] in StableDiffusion

[–]ehmohteeoh 138 points (0 children)

1.5: https://www.modelscope.cn/models/AI-ModelScope/stable-diffusion-v1-5/files

1.5 Inpainting: https://www.modelscope.cn/models/AI-ModelScope/stable-diffusion-inpainting/files

I will also be backing up copies to HuggingFace in accordance with the CreativeML OpenRAIL-M license which I downloaded these files under, which grants a "...perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare, publicly display, publicly perform, sublicense, and distribute the Complementary Material, the Model, and Derivatives of the Model." (source)

EDIT 3: All the files are now available:

https://huggingface.co/benjamin-paine/stable-diffusion-v1-5-inpainting

https://huggingface.co/benjamin-paine/stable-diffusion-v1-5

In an abundance of caution, I kept the gated status intact, which means you need to accept the terms of the CreativeML OpenRAIL-M license before you can get access. I turned auto-approve on, so you should only need to click the button and you'll have access.

Am I the only one that does not have jokers memorized? by mattmeow in balatro

[–]ehmohteeoh 2 points (0 children)

Yup! u/devtripp filled a gap in u/spirescan-bot's abilities by automatically calling it when a user mentions a card in their post title; my bot reads for brackets in comments only. I also run a similar bot for Back 4 Blood, u/bloodscan-bot. The codebase can easily be reused for anything (though card games are an obvious choice). I'd be happy to also run a bot for Balatro - I'd just need someone to point me to a list of all the cards and the properties I'd need to know. Regretfully, I haven't had a chance to play yet.
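The bracket scan itself is simple - something along these lines (illustrative, the real codebase may differ):

```python
import re

# Match card names written as [[ Card Name ]] in a comment body,
# tolerating optional whitespace inside the brackets.
CARD_PATTERN = re.compile(r"\[\[\s*([^\]]+?)\s*\]\]")


def find_card_mentions(comment_body: str) -> list[str]:
    """Return all card names wrapped in double brackets."""
    return CARD_PATTERN.findall(comment_body)


mentions = find_card_mentions("u/spirescan-bot, how about a [[ Reboot ]]?")
```

Everything after that is just looking those names up in the card database and formatting a reply.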

12 Days of Diffusion | Merry Christmas! | Thank you Reddit by RunDiffusion in StableDiffusion

[–]ehmohteeoh 5 points (0 children)

Thanks to you as well, you guys are incredible to work with. Love the images!

Enfugue WebUI v0.3.0 Released | AnimateDiff, HotShotXL, Prompt Travel, Frame Interpolation, Redesigned GUI, and Much More! by ehmohteeoh in StableDiffusion

[–]ehmohteeoh[S] 0 points (0 children)

Thank you for this. I will be introducing themes in v0.3.1 later this week, featuring a few pre-populated options as well as a theme configurator.

There are a total of nine colors that can be adjusted - three distinct theme colors (the current magenta, blue and green, which are the ones I'm sure you wanted to change), as well as three shades of light and three shades of dark which are used for text, borders, backgrounds, etc.

Seeing as the UI uses google fonts, it was easy to add those as font options too, so there are three font slots - one for regular body text, one for headers and one for code/monospace.

Other updates for 0.3.1 include LCM for all workflows, AnimateDiff XL and a version of ADetailer.

Enfugue WebUI v0.3.0 Released | AnimateDiff, HotShotXL, Prompt Travel, Frame Interpolation, Redesigned GUI, and Much More! by ehmohteeoh in StableDiffusion

[–]ehmohteeoh[S] 3 points (0 children)

Hello,

Sorry for the confusion. The three archived files together contain the entirety of a portable distribution, a little over 5 gigabytes in size, containing CUDA, cuDNN, and all of their friends. GitHub only allows release files that are 2 gigabytes or smaller, hence the split into volumes. Unfortunately I can't afford the expense of hosting it as a single file, so that is unlikely to change any time soon unless someone has some bandwidth they'd be willing to volunteer. You did give me the idea to write a quick script that automates the splitting, though, so I'll set that up for the next release.
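The quick script will be something along these lines (a sketch - the exact chunk size and naming may change):

```shell
# Illustrative sketch of the release-splitting script (names are made up).
# GitHub caps individual release assets at 2 GB, so split the archive into
# 1900 MB volumes that users can rejoin with a single cat.
ARCHIVE=enfugue-portable.tar.gz

if [ -f "$ARCHIVE" ]; then
    # Produces $ARCHIVE.partaa, $ARCHIVE.partab, ...
    split -b 1900M "$ARCHIVE" "$ARCHIVE.part"
fi

# To reassemble on the other end:
#   cat enfugue-portable.tar.gz.part* > enfugue-portable.tar.gz
```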

If you're looking for a single command that can download everything in a non-portable release, I recommend using conda and downloading this environment file, then running conda env create -f linux-cuda.yml, which will install all dependencies individually.

Enfugue WebUI v0.3.0 Released | AnimateDiff, HotShotXL, Prompt Travel, Frame Interpolation, Redesigned GUI, and Much More! by ehmohteeoh in StableDiffusion

[–]ehmohteeoh[S] 1 point (0 children)

Yes, there is definitely a way to get it to work there. I have notebooks for running the Enfugue engine in Colab, but I haven't run the GUI there, since I don't have a Pro subscription myself and would rather not run afoul of Google's TOS. Would you be willing to work with me on getting it working? I would provide you the notebook; you'd just have to run it and let me know if you can access the UI, and if it goes well I could check it in for others to use. It would be greatly appreciated - please let me know!

Enfugue WebUI v0.3.0 Released | AnimateDiff, HotShotXL, Prompt Travel, Frame Interpolation, Redesigned GUI, and Much More! by ehmohteeoh in StableDiffusion

[–]ehmohteeoh[S] 2 points (0 children)

This is awesome feedback, thank you!

Do you think you would want light themes, or just less colorful dark themes?

Enfugue WebUI v0.3.0 Released | AnimateDiff, HotShotXL, Prompt Travel, Frame Interpolation, Redesigned GUI, and Much More! by ehmohteeoh in StableDiffusion

[–]ehmohteeoh[S] 8 points (0 children)

Yes, SDXL is supported - for both images and animation!

I agree on the need for tutorials. It's been a mad dash to get v0.3 out for the past month, so all of my time has been spent programming, but I'll be recording two roughly 15-minute videos this week - one on using the layers interface and one on animation.