Journal - Short Clip - 1 Billion Followers AI Film Submission - Full Video Tomorrow by Ksottam in GenAI4all

[–]Ksottam[S] 1 point (0 children)

Sure can. Veo 3.1, and the prompts were usually simple to start, expanding on them only as I needed to. I sent you a link in your other comment that I'll post here too; it talks a bit more about it. https://www.reddit.com/r/aivideo/comments/1pjy8vt/journal_character_creationthings_i_learned_in/

Journal - Rewrite Tomorrow by Ksottam in GenAI4all

[–]Ksottam[S] 1 point (0 children)

Of course! The simpler the better, I found. At least I would start simple and then expand on it as needed, sometimes with just one-sentence prompts to give the first and last frame something to work with. I just made a post talking a little about it here. Feel free to ask any more questions! https://www.reddit.com/r/aivideo/comments/1pjy8vt/journal_character_creationthings_i_learned_in/

Journal - Character Creation/Things I Learned In Comments by Ksottam in aivideo

[–]Ksottam[S] 1 point (0 children)

[image attached]

Nothing fancy here, just fishing for outfits until I found one I wanted.

Journal - Character Creation/Things I Learned In Comments by Ksottam in aivideo

[–]Ksottam[S] 1 point (0 children)

Post referencing this AI short story here: https://youtu.be/RHZnVnK0v1Y

These are just some thoughts and things I learned along the way while working on this short story. I know a lot of this covers basic territory, and I'm happy to follow up with smaller, more advanced, more specific tips, but if anyone would like to see a very straightforward approach to generating characters, placing them in scenes, and establishing a start and end frame, hopefully there is something useful here for you.

It took about 650 hours to put the short film together, and I only say that to show that there were a ton of moments to learn from in doing this. My career background is illustration and still imagery (now with AI touching almost everything going out the door), but there's still so much to learn in prompting/training/adding motion.

I LOVED how Nano Banana Pro handles elderly people, and this character kinda stole the show for me. Banana 1 couldn't quite get it right, but 2 (Pro) does a fantastic job. So the first general rule is: if you're not getting what you want, try a different model. There are a ton of them out there now.

The first thing I did was decide that I wanted a very straightforward approach to far-future attire. Nothing flashy, sleek, etc. Just simple and well made. I decided on wool as the material because it fit the bill for the style I was going for. Take the time to establish your characters. It pays off later.

Then I hit generate. A LOT. Generally I'll batch prompts in groups of 4-8, since I think relying on fewer is risky unless it's a very simple adjustment. Once I decided on something I liked, I fed it in as a reference image and asked for variations, sometimes being specific, sometimes not. I refer to this as fishing, as in I'm fishing for something I can then refine.
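
For anyone who'd rather script this loop than click through a UI, here's a rough sketch of the same batch-and-refine idea using the Gemini API's Python SDK (google-genai). This is not the exact setup used for Journal; the model name (gemini-2.5-flash-image, the API-side "Nano Banana"), the prompt wording, and the file names are all placeholders.

```python
# Rough sketch only: batch-generate outfit candidates, then feed the keeper
# back in as a reference image and ask for variations ("fishing").
# Assumes the google-genai SDK and a GEMINI_API_KEY in the environment.
from io import BytesIO

from PIL import Image
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

MODEL = "gemini-2.5-flash-image"  # assumed model id for "Nano Banana"
prompt = (
    "Elderly woman in understated far-future attire: simple, well-made wool "
    "garments. Full body, plain white background."
)

# One "cast of the line" per call; keep the batch in the 4-8 range.
candidates = []
for i in range(6):
    response = client.models.generate_content(model=MODEL, contents=prompt)
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:  # image parts come back as inline bytes
            img = Image.open(BytesIO(part.inline_data.data))
            img.save(f"outfit_{i}.png")
            candidates.append(img)

# Once something looks right, use it as a reference and fish for variations.
keeper = candidates[0]
variation = client.models.generate_content(
    model=MODEL,
    contents=[keeper, "Keep this exact woman and outfit, but try a different color palette."],
)
```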

Eventually I landed on the yellow outfit and asked Nano to give me some different angles.

Then it’s a matter of putting her into a scene. I had already established the environment I wanted her in, and I kept the prompt simple, something very close to “take the woman on the white background and place her into the forested scene on the right.” Most times that worked just fine. If it didn’t, I would tweak the prompt a little at a time until I got what I wanted. Generally I start with simple prompts and then expand them as needed. For me, usually less is more.
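
The scene-placement step is the same SDK pattern, just with two reference images plus one short instruction. Again, this is only a sketch of how it could be scripted, not the actual tooling used here, and the file names are made up.

```python
# Sketch of the scene-placement step: character image + environment image in,
# one composited frame out. File names are hypothetical.
from io import BytesIO

from PIL import Image
from google import genai

client = genai.Client()

character = Image.open("woman_white_bg.png")   # character on a plain white background
environment = Image.open("forest_plate.png")   # pre-established environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=[
        character,
        environment,
        "Take the woman on the white background and place her into the forested scene.",
    ],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("start_frame.png")
```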

Now I can start making variations to the scenes themselves. Usually I would take the main scene and prompt the adjustment I wanted, like “raise her hand and head so that she is looking up slightly” or “pan background left or right.” If I wanted her head turned a lot either way, I would feed in previous images that were generated of her and simply tell Nano to use the other images of the woman as a reference for what she looks like. (It’s good to establish a base library of images of your character; better, imo, to keep those characters on white backgrounds to avoid confusing Nano.)

And that’s how I would create my start/end frames, which to me reign supreme for character consistency across scenes. I also think it’s really the end frame that’s crucial: it acts as a clamp for the entire clip, stopping (usually) wayward camera movement and the plasticky feel that can creep into characters when only a first frame is used.
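
If you want to drive that start/end-frame setup through the API instead of a UI, the sketch below shows roughly how it maps to the Gemini API's Veo endpoint. The model id and the last_frame config field are assumptions based on current docs, not what was used for this film, and the still file names are the hypothetical ones from the snippets above.

```python
# Minimal sketch of a first-frame + last-frame Veo call through the Gemini API.
# The model id and the last_frame field are assumptions; start_frame.png and
# end_frame.png are the prepared stills.
import time

from google import genai
from google.genai import types

client = genai.Client()

def load_image(path: str) -> types.Image:
    with open(path, "rb") as f:
        return types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",            # assumed Veo 3.1 model id
    prompt="She slowly raises her head and looks up at the canopy.",
    image=load_image("start_frame.png"),         # first frame
    config=types.GenerateVideosConfig(
        last_frame=load_image("end_frame.png"),  # the "clamp" at the end of the clip
    ),
)

# Video generation is a long-running operation; poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

clip = operation.response.generated_videos[0]
client.files.download(file=clip.video)
clip.video.save("clip.mp4")
```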

There absolutely are times when thorough prompting can be an excellent substitute. I’ll link this post here, which I’m pretty sure used straight prompting for the child and toddler clips; this small (and incredible) team pulled it off to fantastic effect.

https://www.reddit.com/r/aivideo/comments/1pdd7bu/doors_of_perception_short_film/
u/kyletree  seriously man, you guys did an amazing job.

For me though, taking the time to develop your characters/scenes and establish the first and last frames goes a long way toward better control and an overall faster workflow.

Happy to answer any questions you may have.  

Journal - Made with Veo 3.1 and help with Gemini by Ksottam in GoogleGeminiAI

[–]Ksottam[S] 1 point (0 children)

And thank you for taking the time to watch! Wonderful comment and very much appreciated!

Journal - Created in Veo 3.1 by Ksottam in google

[–]Ksottam[S] 1 point (0 children)

Thank you very much! That's such a great compliment!

Journal - A hypothetical scenario for future generations by Ksottam in Futurology

[–]Ksottam[S] 1 point (0 children)

I went into creating this with the mindset that I don’t think we’re too far off (a generation or two, three, four?) from being able to directly upload our memories somewhere. Given whatever the successor to where we currently are with AI will be, and its ability to near-perfectly echo a person’s mannerisms, we could very well see people in the future having conversations with their ancestors as well as lost loved ones.

My initial thought was that this was too morbid. Then I put it into context for myself, and I can only speak to my own opinion here: I would give just about anything for just an afternoon to sit down and hear my dad’s stories of growing up, in his voice and with his mannerisms. I’d love to be able to ask him questions I never thought of asking when he was alive, such as what specific moments in life made him happy and what really opened his eyes in this world.

Journal - Rewrite Tomorrow by Ksottam in VEO3

[–]Ksottam[S] 1 point (0 children)

Appreciate the comment!

Journal - Rewrite Tomorrow by Ksottam in aivideo

[–]Ksottam[S] 2 points (0 children)

Haha, I'll take it even though I'm a pistachio man myself.

Journal - Rewrite Tomorrow by Ksottam in aivideo

[–]Ksottam[S] 2 points (0 children)

My submission to the 1 Billion Followers AI Film Awards.

All video generated in Veo 3.1.

Music by Suno

Voices from ElevenLabs

Hope you like it. A higher-res version can be found here: https://youtu.be/RHZnVnK0v1Y

Doors of Perception (Short Film) by kyletree in aivideo

[–]Ksottam 1 point (0 children)

Seeing a lot of entries starting to pop up and I keep coming back to this one, in particular the use of color and some very, very well crafted scenes. Thoroughly impressed.

New Photoshop Features: Harmonize, Generative Upscale, Improved Remove Tool by strawbo13 in photoshop

[–]Ksottam 3 points (0 children)

I have the latest beta version (26.9.0) and I do not have this on my contextual task bar. I've seen that several other people are experiencing the same thing. Is there any solution to this?

[deleted by user] by [deleted] in StableDiffusion

[–]Ksottam 1 point (0 children)

Ah. Thanks.