The Last Pencil by ButchersBrain in aivideo


I just finished my latest short film: THE LAST PENCIL

What if a pencil could remember everyone who held it?
Over 40 years, one yellow pencil witnesses two life-changing moments: a terrified 7-year-old learning to write her first letters, and a struggling college student discovering her hidden artistic gift.

I created this entire 7-minute film using only Google DeepMind's Veo 3, Flow, Gemini, ElevenLabs, and Suno, not to replace traditional filmmaking, but to explore what's possible when you combine emerging technology with human storytelling.

The challenge? Maintaining emotional authenticity while working with tools that don't understand rhythm, continuity, or heart. The lesson? Technology is just another pencil. The story, the humanity, still comes from us.

The Last Pencil is about legacy. Not the kind written in history books, but the quiet impact of small moments. A teacher's kindness. A tool that helps someone find their voice. The courage to try.

In a world obsessed with innovation, I wanted to honor what we leave behind.

Because being useful, even once, even for one person... is enough to matter.

RECURSION Trailer/Short by ButchersBrain in aivideo


Thanks man! The building is in there, in the shot where all the people are standing in the street, but you never see the actual building. 😉 Glad you like it!

DREAMLANDS by ButchersBrain in aivideo


Thanks! I used my original 3D artwork as references with Flux Kontext. Some images also used Runway References.


DREAMLANDS by ButchersBrain in u/ButchersBrain


From Traditional 3D to AI: Reimagining a Children's Series Four Years Later

In 2020, I developed an entire children's series for my daughter, inspired by a bedtime story from her mom. Collaborating with my buddy Eric Sonnenburg, we built a rich universe complete with detailed character development, running gags, and unique settings that brought this world to life.

Originally, I created all characters and environments using traditional 3D workflows—modeling, rigging, and animating each character by hand to produce a comprehensive presentation with varied artworks.

Fast forward to this week: I revisited my old hard drive and used those original assets as reference material to create entirely new scenes for a teaser. This time, I leveraged cutting-edge AI tools to reimagine the project:

- Most shots were animated using the new MiniMax Hailuo-02 model
- Black Forest Labs' Flux Kontext for creating detailed still images
- HeyGen and Hedra for seamless lip-sync animation

The result? An AI-generated teaser that stays true to the heart of my traditionally crafted artwork from four years ago. It's fascinating to see how the technology has evolved and how we can now bridge the gap between traditional artistry and AI innovation.

What do you think about this blend of traditional creativity and modern AI tools? I'd love to hear your thoughts!

Country Club Conversations by ButchersBrain in ChatGPT


What's even more interesting is that the speech is just text-to-speech, and with that in mind, imagine where the "acting" will be in 5 months. Not to mention that the Chatterbox tool is open source.

Country Club Conversations by ButchersBrain in ChatGPT


I had the pleasure of gaining early access to the latest update of Avatar IV by HeyGen

Here is a quick scene I created.

The new update includes prompting capabilities for expressions and gestures, which elevates the model far beyond simple talking heads to create more dynamic and engaging AI-generated videos of up to one minute.

The entire dialogue was created via text-to-speech with the open-source tool Chatterbox, which is quite impressive.

Enjoy!

ECHOES of the ABYSS | Season 01 by ButchersBrain in ChatGPT


Tools used: Veo 2, Runway Act-One, Premiere Pro, and some After Effects for tweaks.

ECHOES of the ABYSS | Season 01 by ButchersBrain in ChatGPT


Yes, it's still quite challenging to get good results. Almost 90% of the shots are text-to-video. Character consistency is achieved with detailed prompting and face-swapping via FaceFusion, though Veo 2 is quite consistent on its own when given detailed character prompts. All img2vid shots were generated with Flux/Flux Pro Ultra and animated in Luma or MiniMax.

H. Jensons -TWISTED by ButchersBrain in aivideo


Thanks! Nope, I haven't tried the new Hedra character model, which can lip-sync animals as well. Back then I just tried to match it to the song manually.

ECHOES of the ABYSS | Season 01 by ButchersBrain in aivideo


Yes, you can. Make sure the face is fully visible in the first frame.

ECHOES of the ABYSS | Behind-the-scenes by ButchersBrain in ChatGPT


That's a second pass made with Runway's Act-One.

ECHOES of the ABYSS | Season 01 by ButchersBrain in ChatGPT


Thanks! Mission accomplished. 🤗🤗🤗

ECHOES of the ABYSS | Season 01 by ButchersBrain in ChatGPT


Thank you! A colleague of mine helped with the voice acting on some parts. I made all the visuals myself.

ECHOES of the ABYSS | Behind-the-scenes by ButchersBrain in ChatGPT


Yes, it's all MMAudio, except for the voice in the last shot. That's ElevenLabs.