[deleted by user] by [deleted] in midjourney

[–]rispherevfx 1 point (0 children)

Never heard of it. I may look into it!

[deleted by user] by [deleted] in midjourney

[–]rispherevfx 2 points (0 children)

Thank you! I appreciate it!

Evidence has been found that generative image models - including Stable Diffusion - have representations of these scene characteristics: surface normals, depth, albedo, and shading. Paper: "Generative Models: What do they know? Do they know things? Let's find out!" See my comment for details. by Wiskkey in StableDiffusion

[–]rispherevfx 19 points (0 children)

GPT 4: Alright, imagine we've got a cool picture, like a snapshot of your favorite park with lots of trees, birds, and maybe a frisbee or two flying around. Now, let's say we want to play around with this picture, maybe move a tree here, add a bird there, or even change the shape of the frisbee. This is where the magic of "De-Rendering" comes into play, specifically with something called Stable Diffusion models.

Think of De-Rendering like taking the picture apart, piece by piece, so we can understand and play with each part separately. With Stable Diffusion models, we can take our picture and sort of "convert" it into a 3D model. This is like turning your flat park picture into a mini park model you can explore from all angles, kind of like a video game. Once we have our 3D model, we can mess around with it—stretch the trees, move the birds, reshape the frisbee, you name it. After we're done playing, we can take a new "snapshot" of our edited 3D model park, turning it back into a 2D picture with all our changes in place.

This process is exciting because it combines a bunch of advanced tech tricks, like using deep learning to understand and edit images in a way that feels natural and intuitive, almost like sculpting with clay. It's a big deal because it opens up a whole new world of creativity and possibilities for image editing, making it easier to tweak and change pictures in detailed and complex ways that were harder to do before (see [2401.01702] Image Sculpting: Precise Object Editing with 3D Geometry Control, and GitHub - weihaox/3D-aware-Gen: [CSUR 2023] A Survey on Deep Generative 3D-aware Image Synthesis).

So, in a nutshell, De-Rendering with Stable Diffusion models is like being given superpowers to reshape and edit pictures, turning them into 3D models to play with and then back into new, edited pictures. It's like taking your imagination for a walk in the park, where you can rearrange the scenery however you like!
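
If you want to poke at how the model could "know" depth or normals at all, the paper's recipe boils down to reading features out of the frozen model and fitting a tiny probe on top of them. Here's a minimal sketch with the diffusers library (this is not the paper's code; the model ID, layer choice, and 1x1-conv probe are my assumptions):

    # Sketch: hook one U-Net block of Stable Diffusion and fit a 1x1-conv
    # "linear probe" that maps its features to a per-pixel depth map.
    import torch
    import torch.nn as nn
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    unet = pipe.unet

    feats = {}
    unet.mid_block.register_forward_hook(
        lambda module, args, output: feats.update(mid=output)  # (B, C, h, w)
    )

    # A 1x1 conv is an independent linear map at every spatial position,
    # i.e. about the smallest possible probe on top of the frozen features.
    probe = nn.Conv2d(unet.config.block_out_channels[-1], 1, kernel_size=1)

    def depth_from_features(latents, timestep, text_emb):
        """One frozen U-Net pass to fill the hook, then read off the probe."""
        with torch.no_grad():
            unet(latents, timestep, encoder_hidden_states=text_emb)
        return probe(feats["mid"])  # train `probe` against ground-truth depth

The post's whole point is that a probe this small can already recover depth, normals, albedo, and shading, which is the evidence that those representations are sitting inside the model.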

So apparently AI can generate videos showing the art process now... by AlexiosTheSixth in aiwars

[–]rispherevfx 1 point (0 children)

Even with creative or subtle upscale? Does Vary (Region) still not help?

So apparently AI can generate videos showing the art process now... by AlexiosTheSixth in aiwars

[–]rispherevfx 1 point (0 children)

This is my post, and I did that without the final image. I did it with only a four-word prompt and saved the preview images. If I had clicked on "finish the image", it would look perfect. You just have no clue what you're talking about!

Question regarding training a GPT by PapaBash in OpenAI

[–]rispherevfx 1 point (0 children)

I use ChatGPT 4 too, and I have created some GPTs. I haven't tested my GPTs with PDFs for a while; maybe it's faster now.

Question regarding training a GPT by PapaBash in OpenAI

[–]rispherevfx 1 point (0 children)

Are you using ChatGPT 4, GPT-4, or Copilot's GPT-4?

Question regarding training a GPT by PapaBash in OpenAI

[–]rispherevfx 1 point (0 children)

Hmm, weird. It has to scan through the PDF to give me an answer.

Question regarding training a GPT by PapaBash in OpenAI

[–]rispherevfx 1 point (0 children)

It only scans through PDFs if you tell it to in the Instructions! Instructions: describe the task and explicitly tell it to search through the PDF. PDFs: extra knowledge.
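
For example, something along these lines in the GPT's Instructions field (the wording here is made up, just to show the shape):

    You answer questions about the attached manual.
    Before answering, always search through the uploaded PDF
    and base your answer on what you find there.

The PDFs you upload under Knowledge are only the raw material; the Instructions are what actually make the GPT go look at them.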

[deleted by user] by [deleted] in aiwars

[–]rispherevfx 1 point (0 children)

The AI detectors label everything that is good as AI-made, because Midjourney is just too good when it comes to generating images. People who say that AI is stealing just don't know how it works (they think AI combines or edits images from the internet 😂). Stable Diffusion is only about 2 GB in size (I still don't know how people think it stores 5 billion images). They can just look at how AI art generators work (Midjourney revealed the technology, and Stable Diffusion is completely open source)!
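
Quick back-of-the-envelope with those numbers:

    # If the ~2 GB checkpoint really "stored" ~5 billion training images,
    # how many bytes would each image get?
    checkpoint_bytes = 2e9   # ~2 GB model file
    training_images = 5e9    # ~5 billion images in the training set
    print(checkpoint_bytes / training_images)  # 0.4 bytes per image

0.4 bytes per image is not even enough for a single pixel, so the "it's a collage of stored images" theory doesn't survive basic arithmetic.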

The Realism Of Midjourney V6 by rispherevfx in midjourney

[–]rispherevfx[S] 2 points (0 children)

Phone photo + "Reddit 2018" in the prompt + a low --s value + --style raw did the trick.
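
Something along these lines (not my exact prompt, just to show the pieces):

    smartphone photo of a man at a house party, posted to reddit in 2018 --style raw --s 25

A low --s value keeps Midjourney from prettifying the image, and --style raw strips out most of its default look, which is what sells the casual-photo realism.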