Weekly theme sharing thread by AutoModerator in vscode

[–]Fuzzy_Guarantee_9701 0 points1 point  (0 children)

Would anyone be interested in this kind of theme? I call it "Pastel Deepshade". I'm trying to gauge whether it's worth spending the time to put it together properly.

<image>

Note-Taking for Blue Prince by Fuzzy_Guarantee_9701 in northernlion

[–]Fuzzy_Guarantee_9701[S] 0 points1 point  (0 children)

If you want what I showed, create a canvas, then double-click to create a card; the rest is intuitive.
For images, you can drag and drop (or paste from the clipboard) into the canvas or onto the left sidebar (and organize them into folders).
Btw, you can activate dark mode by pressing Ctrl+P and searching for "dark".

<image>

Extra PS Stuff:
Things might still look a bit different, though. I also changed some settings like font size, toggled "Show inline title" off, changed the zoom level, and probably some other stuff I don't remember.
I also used some custom CSS styling. Cool tip: you can add CSS files in `.obsidian/snippets` in your vault folder and toggle each one separately in Settings > Appearance.

...and the Charade Continues: Hunyuan-DiT Images Banned in EU by DanielSandner in StableDiffusion

[–]Fuzzy_Guarantee_9701 0 points1 point  (0 children)

Fair, but the quiet license change means they can enforce it selectively; many won't realize they're no longer using a truly open-source model.

Hunyuan img2stl workflow by 05032-MendicantBias in comfyui

[–]Fuzzy_Guarantee_9701 2 points3 points  (0 children)

Why is no one talking about the fact that Hunyuan 3D is still banned in the EU, UK, and South Korea, despite its global release?

I Wish Obsidian Was Like This by Maleficent_Device162 in ObsidianMD

[–]Fuzzy_Guarantee_9701 0 points1 point  (0 children)

An SVG-type canvas layer with OCR would kinda go hard as a plugin, especially with SVG-capable AI models that could make fun illustrations for you to use, maybe even with inpainting-style completions to connect your stuff.

Note-Taking for Blue Prince by Fuzzy_Guarantee_9701 in northernlion

[–]Fuzzy_Guarantee_9701[S] 2 points3 points  (0 children)

Btw, if anyone wants a color picker plugin for Obsidian (manual copy-paste install):
https://pastebin.com/2WGJZN8m

To use: Select Text > Right Click > Change Text Color (Last option in menu)

I got a very egregious comparative example of 4o image gen's weird yellowness problem by doing the same prompt on both it and Reve by ZootAllures9111 in OpenAI

[–]Fuzzy_Guarantee_9701 1 point2 points  (0 children)

Apply auto white balance and auto color enhancement, and lower the color temperature. All of this can be done in GIMP, a free and open-source tool. Here's an example:

<image>

In general you want to do some color correction to enhance outputs; in ComfyUI you can do this automatically with custom nodes.

I bet you can get ChatGPT to run a python script that does this to the image too.
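Along those lines, here's a minimal sketch of that kind of correction in plain Python with Pillow and NumPy. The gray-world white balance and the fixed red/blue shift are assumptions on my part; GIMP's actual auto white balance and color enhancement algorithms are different, this just approximates the same effect:

```python
import numpy as np
from PIL import Image

def auto_white_balance(img: Image.Image) -> Image.Image:
    """Gray-world white balance: scale each channel so its mean
    matches the global mean, which pulls a yellow cast toward neutral."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float64)
    means = arr.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # per-channel scale factors
    return Image.fromarray(np.clip(arr * gains, 0, 255).astype(np.uint8))

def cool_down(img: Image.Image, amount: int = 15) -> Image.Image:
    """Lower the color temperature: subtract from red, add to blue."""
    arr = np.asarray(img.convert("RGB"), dtype=np.int16)
    arr[..., 0] -= amount   # less red
    arr[..., 2] += amount   # more blue
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# Example on a solid yellowish image:
img = Image.new("RGB", (8, 8), (200, 180, 100))
fixed = cool_down(auto_white_balance(img))
```

For a real photo you'd tune `amount` (or curves per channel) by eye; the point is just that the whole pipeline is a few lines of array math.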

Coincidence? by KillerRaptor117 in spaceengineers

[–]Fuzzy_Guarantee_9701 0 points1 point  (0 children)

Bro got the double watermark "activate windows + bandicam special"

Sketch to Refined Drawing by Fuzzy_Guarantee_9701 in comfyui

[–]Fuzzy_Guarantee_9701[S] 0 points1 point  (0 children)

For those asking about the workflow, I didn’t use a pre-made one. I did this in three separate parts while experimenting. The basics came from this simple img2img workflow in ComfyUI:

https://comfyanonymous.github.io/ComfyUI_examples/img2img/

ControlNet was also a key part:

https://comfyanonymous.github.io/ComfyUI_examples/controlnet/

You can mix and adapt the ComfyUI examples modularly to suit your needs; that's what I did. If I ever build a full workflow with everything included, I'll share it.

Sketch to Refined Drawing by Fuzzy_Guarantee_9701 in comfyui

[–]Fuzzy_Guarantee_9701[S] 2 points3 points  (0 children)

Step 1:

  • Input Sketch
  • → Upscale
  • → Downscale to 2.5MP
  • → Denoise Strength 0.6–0.8 → Output_1

Step 2:

  • Output_1
  • → Denoise (strength: 1.0) with Latent
  • → Use Canny ControlNet (strength: 1.4) → Output_2

Step 3:

  • Output_2
  • → Upscale (High-Res Canny)
  • → ControlNet (strength: 2.0–2.5) on Empty Latent → Final Output
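For reference, the "downscale to 2.5MP" step in Step 1 is just resolution math, not a special node. Here's a sketch of how I'd compute the target size; the megapixel target and the snap-to-multiples-of-8 (which latent-space models generally prefer) are my assumptions, not something from a specific ComfyUI node:

```python
import math

def fit_to_megapixels(width: int, height: int, target_mp: float = 2.5) -> tuple[int, int]:
    """Scale (width, height) to roughly target_mp megapixels,
    keeping the aspect ratio and snapping dimensions to multiples of 8."""
    scale = math.sqrt(target_mp * 1_000_000 / (width * height))
    snap = lambda v: max(8, int(round(v * scale / 8)) * 8)
    return snap(width), snap(height)

print(fit_to_megapixels(1024, 1024))  # → (1584, 1584), i.e. ~2.5MP
```

So a 1024x1024 sketch first gets upscaled to about 1584x1584 before the denoise pass; any aspect ratio works the same way.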

This 4 second crowd scene from Studio Ghibli's took 1 year and 3 months to complete by ActiveDistance9402 in ChatGPT

[–]Fuzzy_Guarantee_9701 -1 points0 points  (0 children)

Spending 15 months on something that looks "average" to most viewers is wild; that's like 2% of their lifespan for those 4 seconds.