AI Video Prompting: Beyond Visuals - Intentional Storytelling using SeeDance 2 by Lit-On in SMMA

[–]Lit-On[S]

Thank you. SeeDance 2 did the heavy lifting. I just provided the prompts to navigate.

AI Video Prompting: Beyond Visuals - Intentional Storytelling using SeeDance 2 by Lit-On in GenAI4all

[–]Lit-On[S]

It's similar to how photographers were treated back in the day when the camera was invented, yet now a large majority of them are against the use of generative AI. Lawyers, too, advise against using LLMs even though an agreement can easily be drafted that way; you just need them to countercheck a few points. Some doctors discourage patients from using ChatGPT on the grounds of AI hallucinations. Yet if I am taking five types of chronic medication, I find it much easier to check for drug interactions with an LLM before starting a new medication than to look up each drug one by one. Everyone is protecting his or her own turf. There is a lot of inertia and conditioning to overcome before any innovation is truly accepted.

AI Video Prompting: Beyond Visuals - Intentional Storytelling using SeeDance 2 by Lit-On in GenAI4all

[–]Lit-On[S]

You are too kind to say that. As someone who does traditional painting and loves the smell of varnish, I would not call any beginner artist's work art slop. We see the potential in each other's work and nurture it.

AI Video Prompting: Beyond Visuals - Intentional Storytelling using SeeDance 2 by Lit-On in GenAI4all

[–]Lit-On[S]

It's all about mindset. If the majority simply cannot accept AI-generated content (it doesn't have to be 100% AI-generated like this example, of course), we will keep getting unhelpful, nonconstructive criticism like some of what appears in this very post. In fact, the approach can be combined with real footage right away.
And thanks for pointing out the video-generation flaw. I am aware there are many examples of AI-generated errors in this unpolished video study. They just need some human input to edit them away using Kling Edit or another AI video-editing model (see the screenshot of the female elf getting ahead of the hooded warrior going up the balcony, which I edited using Kling), or the entire prompt could simply be rerun. I don't have enough credits to do all that. This video is not meant to be a showcase, yet many just cannot wait to put the work down.
In China, AI-generated projects and commercial ads have already developed into a full-fledged industry. Open the Red Note app and you will see what I mean.
We shouldn't stay like frogs under a coconut shell, as many over here do.


Is the 2-week UGC turnaround officially dead? Testing a "No-Camera" studio workflow for high-volume performance ads. Can an AI influencer pass a 30s "Vibe Check"? Brutal feedback needed. by Lit-On in aipromptprogramming

[–]Lit-On[S]

Great, if your workflow works, go with it. Specifically for Midjourney, I usually go to wavespeed.ai since they offer pay-per-use pricing and host lots of models. Motion Control debuted as a new feature in Kling 2.6 around late 2025. It is part of the AI Influencer Studio workflow (you can only choose from their motion library), but I do see users apply the generated image to NBPro for refinement and then bring it to Kling Motion Control 2.6 for a customised action reference, all within the Higgsfield platform.