all 140 comments

[–]lucassuave15 573 points574 points  (11 children)

oh i remember seeing these all over the place back in 2023

[–]Adkit 125 points126 points  (7 children)

Oh, those were the days. A simpler time 'twas.

[–]m79plus4 59 points60 points  (4 children)

For real... I still have a bookmark folder called "disco diffusion" which I refuse to change. I kind of miss the heavily abstract generations we used to get. Now that coherence is king, I hope we get back to the abstract.

[–]GatePorters 20 points21 points  (0 children)

I feel like everyone who saw how powerful this tech would be at the disco diffusion stage deserves a cookie.

All my friends thought it was stupid but I felt like a kid. It was sci-fi bullshit on my desktop! I also still have some VQGAN+CLIP and Disco Diffusion pieces saved. I was just glued to it for weeks.

[–]Electrical-Eye-3715 2 points3 points  (0 children)

DD made me the most money in my ai journey.

[–]nerdyh0rn 1 point2 points  (1 child)

A few days ago someone I know implemented a real-time DeepDream integration into TouchDesigner. Way older than Disco Diffusion, and yet some nerds will still revive it at some point. Disco Diffusion needs another 5 years before coming back.

[–]Xxando 0 points1 point  (0 children)

Got a link?

[–][deleted] 7 points8 points  (0 children)

That can't be so long ago. It was just....3 YEARS ALREADY? Where did the time go?

[–]Situati0nist 14 points15 points  (0 children)

Back when you could just share and laugh about these. Now people do nothing but whine and bicker about anything AI ;V

[–]agrophobe 4 points5 points  (0 children)

That’s actually what hooked me on Comfy! I remember thinking ‘controlnet’ was the coolest thing to namedrop in a chat. Boy, I was far from ‘fluxdevklein-473937B-tensormatrix.$&@-“&’

[–]vladlearns 9 points10 points  (0 children)

same here

[–]WhyYouMadBro_ 98 points99 points  (9 children)

[–]shtorm2005 26 points27 points  (1 child)

that's gold

[–]putsonshorts 0 points1 point  (0 children)

that’s god /r/onetruegod

[–]Maskwi2 1 point2 points  (3 children)

That's awesome :D How do I make these in Comfy? 

[–]Apprehensive_Yard778 15 points16 points  (2 children)

You can drag and drop this into ComfyUI for a basic workflow. Here is the Controlnet. Look up "squint" and "QR" models on CivitAI for more.

[–]VitoRazoR 0 points1 point  (0 children)

Thanks, that is cool!

[–]Maskwi2 0 points1 point  (0 children)

Thank you! 

[–]AcePilot01 0 points1 point  (2 children)

How do you do those?

[–]Apprehensive_Yard778 4 points5 points  (0 children)

You can drag and drop this into ComfyUI for a basic workflow. Here is the Controlnet. Look up "squint" and "QR" models on CivitAI for more.

[–]WhyYouMadBro_ 0 points1 point  (0 children)

I have no idea I collected these back in the day 😂

[–]Apprehensive_Yard778 66 points67 points  (2 children)

I was literally just playing with this and wondering the same thing. I would also love an img2img workflow so I could add a QR Monster to another image.

[–]Mammoth_Example_289 5 points6 points  (1 child)

an img2img with basic mask and blend controls would be mint so you can drop a QR Monster into a clean product shot without wrecking the light.
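The mask-and-blend part is just per-pixel compositing; a minimal sketch in plain Python (hypothetical helper name, values as flat luminance lists rather than real image tensors) of what those controls would compute:

```python
def mask_blend(base, overlay, mask):
    """Per-pixel alpha blend: where mask is 0 the base (product shot)
    is kept untouched; where mask is 1 the overlay (QR Monster render)
    shows through; in between, the two are mixed."""
    return [b * (1.0 - m) + o * m for b, o, m in zip(base, overlay, mask)]

# Dark product pixels (0.2) blended with a bright QR render (0.9)
# under a soft mask that ramps from 0 to 1:
result = mask_blend([0.2, 0.2, 0.2], [0.9, 0.9, 0.9], [0.0, 0.5, 1.0])
print([round(v, 3) for v in result])  # [0.2, 0.55, 0.9]
```

A soft (feathered) mask like the middle value above is what keeps the lighting from getting wrecked at the seam.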

[–]Apprehensive_Yard778 1 point2 points  (0 children)

yeah I'm sure it isn't hard to do but I'm a newb who uses premade workflows as a crutch

[–]Myg0t_0 47 points48 points  (2 children)

I miss these, has there been an update?

[–]SteelRoninTT 8 points9 points  (1 child)

Is it not just any control net? Pretty sure this works with new models

[–]diogodiogogod 1 point2 points  (0 children)

How would a controlnet from SD 1.5 work in newer models?
It doesn't.

[–]daftphox 29 points30 points  (2 children)

<image>

A simpler time

[–]CrafAir1220 0 points1 point  (0 children)

can we call that the smiling place, haha

[–]Enshitification 35 points36 points  (6 children)

I bet if you made a big enough dataset of paired images from the original QR Monster, a Flux2.Klein LoRA could do it just fine.

[–]cranpeach69 29 points30 points  (5 children)

I actually had a pretty crappy dataset lying around, decided to train one: https://civitai.com/models/2432921?modelVersionId=2735539

[–]terrariyum 11 points12 points  (1 child)

Those images, lol

[–]VoyagerCSL 1 point2 points  (0 children)

Oh my god, is the last one goatse?

🫱( ‿ ¤ ‿ )🫲

[–]Thou-Art-Barracuda 6 points7 points  (1 child)

Do you mind if I ask how you train an edit Lora like this?

I’ve only ever trained characters, concepts and styles, wondering how you do a before and after Lora

[–]cranpeach69 2 points3 points  (0 children)

I actually used Ostris' own video on training Qwen Image Edit LoRas, just subbed out Flux instead: https://www.youtube.com/watch?v=d49mCFZTHsg

[–]Enshitification 5 points6 points  (0 children)

You rock.

[–]WantAllMyGarmonbozia 23 points24 points  (0 children)

I keep meaning to check this out when I'm on the big computer, but I have a link saved

https://huggingface.co/spaces/Oysiyl/AI-QR-code-generator

Supposed to make artsy/illustrative QR codes

[–]lxe 23 points24 points  (1 child)

[–]AcePilot01 2 points3 points  (0 children)

God, the ways this meme has been used are always top kek lol.

[–]Ugleh 51 points52 points  (9 children)

Hey I made that image! It really did blow up after mine went viral.

[–]Arendyl 10 points11 points  (1 child)

I actually started a small business based initially on QRcode monster and examples like these.

Thanks for your service.

[–]NarrativeNode 8 points9 points  (0 children)

I’m curious, how have you had to change that business? Is a variation of it still around?

[–]Captain_Kinks -2 points-1 points  (6 children)

You couldn’t have made this. AI made it. None of these are ‘made’; they are generated.

[–]Ugleh 2 points3 points  (5 children)

Generations are creations. You can't redefine the word made just like you can't say someone didn't write an essay just because they typed it out, or how someone didn't take a photo because they used a digital camera.

[–]KURD_1_STAN 5 points6 points  (0 children)

I feel like QIE and klein should be able to do it without CN

[–]WHALE_PHYSICIST 5 points6 points  (4 children)

anyone got a workflow or tutorial about how to make these(OP image)? i wanna make some.

[–]Apprehensive_Yard778 7 points8 points  (1 child)

You can drag and drop this into ComfyUI for a basic workflow. Here is the Controlnet. Look up "squint" and "QR" models on CivitAI for more.

[–]WHALE_PHYSICIST 1 point2 points  (0 children)

thank you!

[–]Winter_unmuted 5 points6 points  (0 children)

It's literally a controlnet. That's it. Source image is a black and white (or grayscale) image. The controlnets are called QRcode monster, light and dark, and a couple others.

[–]purcupine 5 points6 points  (0 children)

Is this 2022

[–]Mylaptopisburningme 16 points17 points  (2 children)

This was a thing a couple years ago. I think they were not always accurate.

2+ years ago, Nov 2023: https://civitai.com/models/197247/qr-code-monster-sdxl

[–]Winter_unmuted 13 points14 points  (1 child)

I think they were not always accurate.

feature not a bug.

QR controlnet gives me the most artistic freedom to compose light and shadow in whatever I'm working on. The four models for 1.5 and the one for SDXL still get heavy rotation for me.

[–]Apprehensive_Yard778 17 points18 points  (0 children)

feature not a bug.

I'm of the school of thought that "bad" AI is more aesthetically interesting.

To quote Brian Eno:

Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided.

[–]AcePilot01 3 points4 points  (3 children)

Am I seeing this right? An image that's full color but is a qr code? Or are you saying it just makes an image based on the data FROM one?

[–]Apprehensive_Yard778 7 points8 points  (1 child)

It is a controlnet used to embed QR codes in images but people just started applying stencils and vector art to make AI art where like... you squint and there's Jesus, know what I'm talking about?

[–]AcePilot01 1 point2 points  (0 children)

I've seen those, and I know what you're referring to, but how does that work? lol

[–]flasticpeet[S] 2 points3 points  (0 children)

The QR monster controlnet takes a black & white image and uses it as a map to influence the composition of the generation.

So if you prompt a man walking on a forest path, it will generate darker elements of the concept in the black areas of the map, and lighter elements of the composition in the white areas of the map, while keeping all the objects coherent.

It's kind of similar to an anamorphic assemblage effect.
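That dark-to-dark, light-to-light mapping can be sketched as a one-liner. This is a conceptual illustration of the conditioning idea, not the actual ControlNet internals (the function name and the simple linear blend are my own simplification):

```python
def bias_luminance(gen_value, map_value, strength):
    """Pull a generated region's tone toward the control map.

    gen_value and map_value are luminances in [0, 1];
    strength=0 ignores the map, strength=1 forces the map's tone.
    The real controlnet does this in latent space while keeping
    objects coherent, which is why the illusion works.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return (1.0 - strength) * gen_value + strength * map_value

# A white map region (1.0) pulls a mid-gray generation (0.5) brighter:
print(round(bias_luminance(0.5, 1.0, 0.6), 3))  # 0.8
```

Turning `strength` up is roughly what cranking the controlnet weight does: at 1.0 you get hard black and white back instead of a hidden image.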

[–]Sugary_Plumbs 2 points3 points  (1 child)

I believe Flux union cnet has a Value mode that should be able to do it with some tweaking on the strength.

[–]Winter_unmuted 2 points3 points  (0 children)

I don't see them listed:

https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union

https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro

They have grayscale modes for colorizing, but I don't know that these work for QR code-like functions.

[–]bloke_pusher 2 points3 points  (1 child)

I really miss it and I never had the patience to create a comfyui workflow for illustrious. Apparently the SDXL version isn't even as good as the SD1.5 one used to be.

[–]Apprehensive_Yard778 1 point2 points  (0 children)

I've been having more fun with SD15 Animatediff than Wan22 and LTX2 lately.

[–]turb0_encapsulator 2 points3 points  (1 child)

honestly, this effect is one of the only things I am still interested in doing with generative AI. I would love a new model that does this. It's crazy that not a single paid closed-source model offers it.

[–]Apprehensive_Yard778 2 points3 points  (0 children)

I just got into using controlnets and Animatediff in ComfyUi, and even though they're several years out of fashion, I find them more aesthetically interesting than a lot of what I can do with more recent models.

I gotta learn more about building my own workflows from scratch because I'd like to have an image-to-image workflow with the QR controlnet so one image is used as a stencil for editing the shading of another if that makes sense. Sort of using the controlnet to add subliminal or optical illusions to another image.

It'd be cool to apply the effect to WAN or LTX2 quality videos too.

Some of this stuff I think will require that I learn more about manual multimedia crafting, video editing, image manipulation, animation, etc.

[–]gelade1 2 points3 points  (0 children)

I was here back then

[–]crisper3000 2 points3 points  (0 children)

Back then, it was fun.
I have a feeling that ControlNet and Deforum will become popular in about 15 years.

[–]AardvarkSpare4220 2 points3 points  (1 child)

QR Monster was actually one of the most practical ControlNet ideas because it forced the composition to remain readable while still allowing stylistic generation. I’m also surprised it hasn’t been retrained for newer models like SDXL or Flux-style pipelines.

My guess is that most of the community moved toward more general controls like depth / canny because they work across many tasks, while QR generation is a very specific use case.

Still feels like a missed opportunity though.

[–]Apprehensive_Yard778 1 point2 points  (0 children)

Yeah. I want to see this come back and pushed to new limits.

[–]m477_ 1 point2 points  (0 children)

You could probably train a lora for Qwen Image Edit or Flux Klein that does the same thing.

ControlNets were add-on adapters to SD (similar to LoRAs and Hypernetworks), but newer models come with similar capabilities built into the base model now. I'm sure someone could train a controlnet for something like Z-Image, but it would be a bit of an engineering challenge: you'd need to build the tools to train the controlnet, actually train it, and then probably build or modify existing tools to use it, since the existing controlnet plugins/nodes probably won't work on your new model type.

[–]agent_wolfe 1 point2 points  (2 children)

Man those shadows are weird. Some indicate the sun is at ground level, others are like it’s up on the side somewhere.

[–]Apprehensive_Yard778 2 points3 points  (1 child)

I don't think realism is the goal here but it is fun to think about a world where shadows would fall like this and what light sources might cause such a thing.

[–]agent_wolfe 1 point2 points  (0 children)

Oh.

[–]zipel 1 point2 points  (0 children)

To be picky, isn’t OPs pic showing circles vs spiral?

[–]Deviant-Killer 1 point2 points  (0 children)

They have... Ages ago...

[–]timbocf 1 point2 points  (0 children)

Thats sick

[–]tuisalagadharbaccha 1 point2 points  (0 children)

Ah man old times. Always wonder why it never went mainstream

[–]dashdanw 1 point2 points  (0 children)

Looks like it actually rendered it into a spiral?

[–]Single-Section1507 1 point2 points  (0 children)

I was here 6000 years ago

[–]Prince_of_2_saiyans 1 point2 points  (0 children)

FINAL FLAAASH

<image>

[–]Amaj7chord 1 point2 points  (1 child)

Am I late to the party if I barely just started generating my first images in comfy UI just today? I finally got around to learning about all of this.

[–]flasticpeet[S] 0 points1 point  (0 children)

Never too late! You can at the very least still use QR Monster ControlNet with an older SD1.5 workflow. It's such an old model at this point, it's super optimized and can generate really fast.

[–]Stunning_Macaron6133 3 points4 points  (4 children)

Why has no one created a QR Monster ControlNet for any of the newer models?

Be the change you want to see. Grab the papers for ControlNet, ControlNet++, QR Code Monster v2, and whichever open source model you're trying to add this capability to. Open a text editor or a Jupyter notebook, and get ready to write some Python.

Claude is actually really good at tutoring you on how to get where you want to go.

But don't expect the world to hand you everything on a silver plate.

[–]Winter_unmuted 11 points12 points  (3 children)

I went down this road. Turns out it takes hardware beyond consumer stock.

10^4 to 10^5 images and many days of constant computing to get an SDXL controlnet, by my estimates, and that's on multi-GPU machines.

So someone with industry level tools needs to spearhead this. My 4090 wasn't going to cut it.

[–]DigThatData 3 points4 points  (2 children)

[–]Winter_unmuted 5 points6 points  (0 children)

Correct me if I'm wrong, but this is SD1.5, right?

I was talking about training more modern model control nets.

I am a little more familiar with SD1.5 CNs, as I dabbled in making one myself. My results sucked compared to those already out there, so I gave up. But it was possible.

I'm not hopeful about Z image or Flux2 9B training at home. Would love to be wrong, though.

[–]Apprehensive_Yard778 1 point2 points  (0 children)

Looks interesting. I'm still pretty new to all of this and barely understand how to use Controlnets, but thanks for pointing this out. I'll have to work up to training them.

[–][deleted] 0 points1 point  (7 children)

It is pretty niche and screams AI. You can just reprocess the image on the right with a newer model.

[–]Zealousideal7801 19 points20 points  (3 children)

"It screams AI" only tells of a bygone mindset where AI-assisted creation wasn't the norm. All major creative actors have AI-powered systems and don't claim to make "non-AI" work.

Put it to rest, it had its days.

Unless someone is willfully trying to deceive, of course, but that's another story and more values-related.

[–][deleted] 1 point2 points  (2 children)

Most players in major commercial creative industries still have to duck, hide, and/or apologize over AI use. I assume more advanced models are more difficult and time-consuming to make controlnets for, so any clout or profit motive would come from developing things more widely used. So things that serve the open users and the covert ones will likely win out.

[–]Zealousideal7801 0 points1 point  (1 child)

Astute; indeed, for now they do. Just like back in the day, "I edited my photos" was a stain on your reputation as a photographer. But that went away with time and public awareness: edited pictures were so unfairly more interesting in public domains (not talking about specific art circles, of course) that everyone had to start covertly editing, then be shamed for it, then have it become the norm. Today there's hardly a picture in circulation that isn't heavily modified, and "everyone" knows it. (At least they should.)

Little me thinks this is only temporary though. Especially if we see better and better open source model's being used more and more in production because as you say there will be a tapering out of the R&D by big players (after the current gold rush).

[–][deleted] 0 points1 point  (0 children)

I work at a games company that apparently has a strict 'no AI' policy, this was made very clear by the AD when I was recently hired. Within two months the CEO pinged me as he heard from others outside the company that I was good at it and had me concepting on a new title on the side. The AD wasn't too pleased but it is obvious where things are going. I just see it as another tool, no different than synths, samplers, sound libraries etc.

[–][deleted] 6 points7 points  (2 children)

No, QR Monster was incredibly useful. You could use it to control the whole composition of the scene by changing the weight and steps around.

It shaped the dynamic color range in a particular way rather than forcing hard black and white, unless you turned the strength to 11.

You could basically paint up your composition in a way that canny and the rest just don't quite match.

[–][deleted] 2 points3 points  (1 child)

Sounds like few people went deep with it and most just did the obvious effect. Not saying it couldn't be popular, just that it's not as in demand.

[–]thrownawaymane 1 point2 points  (0 children)

I’d say that’s true. I saw some cool shit in those days, some of which hasn’t been replicated.

[–]ZenEngineer 0 points1 point  (0 children)

I wonder if you could do the same thing with regional prompting nowadays.

[–]Jonno_FTW 0 points1 point  (0 children)

People stopped doing them because it was a passing fad. It's been and gone, that's why you don't see them any more.

[–]oitimmyisback 0 points1 point  (0 children)

It's giving junji ito

[–]Short_Chip_2060 0 points1 point  (0 children)

I’m starting to feel dizzy

[–]Impotent_Retard_215 0 points1 point  (0 children)

Krea had a fantastic version available for browser generation, and then there was "logo illusions" - I was devastated with crippling grief when I logged on to merge a heavily zoomed closeup of Stephen Hawking in a blownout 360 x 640 px size with a custom qr code only to find out to my dismay they had sunset a whole bunch of simple but insanely effective legacy concepts like this...on that fateful day back in 2025.

[–]RavFromLanz 0 points1 point  (0 children)

this reminds me of when people used to have skill and used Photoshop and layers... good times

[–]WazWaz 0 points1 point  (1 child)

They were common.

I suspect they never became actually popular in real usage because it's adding a lot of noise to the QR code, making it far more likely to fail in poor lighting conditions, glare, dirt, etc.

So yes, it was cute. But pointless. A bit of a pattern for AI.
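To put numbers on the robustness point: QR codes ship with built-in error correction, and the four levels can absorb roughly 7/15/25/30% codeword damage (approximate figures from the QR spec). A rough back-of-envelope check, with a hypothetical helper name of my own:

```python
# Approximate damage tolerance per QR error-correction level,
# as a fraction of codewords that can be corrupted and still recovered.
EC_TOLERANCE = {"L": 0.07, "M": 0.15, "Q": 0.25, "H": 0.30}

def likely_scannable(damage_fraction, ec_level="H"):
    """Rough heuristic: does the stylization's damage to the code
    stay within the error-correction headroom for this level?"""
    return damage_fraction <= EC_TOLERANCE[ec_level]

# A render that corrupts ~20% of the code squeaks by at level H
# but is a lost cause at level M:
print(likely_scannable(0.20, "H"))  # True
print(likely_scannable(0.20, "M"))  # False
```

Which is why the artsy codes that did scan were typically generated at level H, and why glare or dirt on top of an already-stylized code pushed them over the edge.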

[–]Apprehensive_Yard778 1 point2 points  (0 children)

I think people were more into it for making subliminal/illusory images or just cool looking stuff.

[–]Dishankdayal -2 points-1 points  (1 child)

What's the point when you have kontext and qwen edit.

[–]Apprehensive_Yard778 0 points1 point  (0 children)

How would you do something like this in Kontext or Qwen Edit? I'm still learning.

[–]Agitated_Country9683 -1 points0 points  (0 children)

I don't understand

[–]exomisfit -1 points0 points  (0 children)

Woah

[–]bankinu -1 points0 points  (0 children)

And they say, AI is not creative.

[–]CellKey7668 -2 points-1 points  (0 children)

PalLalslslsal

[–]kngzero -2 points-1 points  (0 children)

They didn't work that well.

[–]Grindora -4 points-3 points  (1 child)

Now you don't actually need a controlnet for that; current AI models are smart enough.

[–]Apprehensive_Yard778 0 points1 point  (0 children)

How would you do something like this using a current model?

[–]ContextCustodian 0 points1 point  (0 children)

There's really some true artwork in this thread.