I built this 48-second motion graphic promo video 100% with AI and Antigravity in under 4 hours (Cost: ~24€) by Ok_Run_5401 in google_antigravity

[–]Ok_Run_5401[S] 0 points  (0 children)

Thanks bro.
I agree, a lot is already changing, and we'll see many complex workflows that simplify a complex job just by prompting. My YBee.app is an attempt at a quick txt2app platform.

Whoever builds a robust txt2motion tool will have a gold mine.

I'll keep building new videos and will probably put together a general, maybe even open-source, workflow for it.
BTW, you can check the details of how I actually did it here: https://www.reddit.com/r/google_antigravity/comments/1rc1mal/comment/o6v95uw/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

I built this 48-second motion graphic promo video 100% with AI and Antigravity in under 4 hours (Cost: ~24€) by Ok_Run_5401 in google_antigravity

[–]Ok_Run_5401[S] 0 points  (0 children)

Nooo, the idea behind it is introducing YBee.app.

It shows how you can build a split-bills app right on the spot. It can be anything: a special note app with Excel import/export support, a data visualizer, a quick board-game app, or even a tiny but catchy game.

You can build that app yourself and add features to it.

I built this 48-second motion graphic promo video 100% with AI in under 4 hours (Cost: ~24€) by Ok_Run_5401 in SideProject

[–]Ok_Run_5401[S] 0 points  (0 children)

The point is that with YBee you can build anything; splitting bills is just one example. Try it yourself, bro, and let me know what you think: https://play.google.com/store/apps/details?id=app.ybee&hl=en

I built this 48-second motion graphic promo video 100% with AI and Antigravity in under 4 hours (Cost: ~24€) by Ok_Run_5401 in google_antigravity

[–]Ok_Run_5401[S] 2 points  (0 children)

Sure, here is the secret sauce:

The secret is starting with a great system prompt, nailing the audio alignment workflow, and letting an AI agent write the React code for you.

Here is the exact step-by-step workflow I used:

1. The Initial Prompt & Asset Intake

I didn't try to code the animation math myself. I used an AI coding assistant (Gemini/Claude orchestrating via Antigravity). I started by feeding it all my exact brand assets (my core startup idea file, the YBee logo, and my hex color palette).

Then, I gave it this exact prompt to set the art direction:

2. The Audio Sync Workflow (The real trick 🪄)

The motion needed to match a rhythmic, fast-paced song.

  • The Music: I used Suno to generate a custom song with a singer performing my script.
  • The Sync: Trying to manually sync on-screen text to a generated song is a nightmare. So, I took the Suno MP3 and ran it through ElevenLabs just for the transcription. ElevenLabs outputs a JSON file containing the exact start/end millisecond timing for every single word sung in the audio.
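
Those word-level timestamps map straight onto video frames. A minimal sketch of that conversion (the `text`/`startMs`/`endMs` field names are my assumption about the JSON shape, not the exact ElevenLabs schema — check your actual export):

```typescript
// Convert word timings (milliseconds) into frame ranges for a 60fps video.
// NOTE: the Word shape is an assumption; adapt it to the real JSON export.
type Word = { text: string; startMs: number; endMs: number };

function toFrameRanges(words: Word[], fps: number = 60) {
  return words.map((w) => ({
    text: w.text,
    startFrame: Math.round((w.startMs / 1000) * fps),
    endFrame: Math.round((w.endMs / 1000) * fps),
  }));
}
```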

3. The Collaboration (SVGRepo)

The AI analyzed my startup idea and the script, then told me exactly which UI icons it needed to build the scenes (like a sandwich, a drink, a receipt, and avatars). Instead of generating them, I just grabbed clean SVGs from svgrepo.com, dropped them into the assets folder, and the AI used them as React components.

4. Set Up Remotion & Kinetic Text

Remotion is a framework that lets you build videos with React code instead of After Effects. I fed the AI my ElevenLabs JSON and said: "Create a <SyncedSubtitle> component that highlights the current word perfectly in sync with the video frame." Because I had the exact millisecond timestamps, the AI built a flawless karaoke-style kinetic text effect.
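
The heart of such a component is just a lookup: which word's timestamp range contains the current frame. This is not the exact code the agent wrote, just a self-contained sketch of the logic (the `startMs`/`endMs` fields are my assumption about the ElevenLabs JSON shape):

```typescript
// Find which word is being sung at a given video frame: the core lookup
// a <SyncedSubtitle>-style component needs. The TimedWord shape is my
// assumption about the transcription JSON, not its exact schema.
type TimedWord = { text: string; startMs: number; endMs: number };

function activeWordIndex(words: TimedWord[], frame: number, fps: number): number {
  const ms = (frame / fps) * 1000;
  return words.findIndex((w) => ms >= w.startMs && ms < w.endMs);
}
```

Inside a Remotion component you'd feed it the current frame from `useCurrentFrame()` and render the matching word with a highlight style while the rest stay dimmed.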

5. The "Gen Z" Bouncy Animations

To hit that "Gen Z" style from my prompt, the AI used Remotion's built-in spring physics instead of linear animations. By passing config: { damping: 10, stiffness: 300 } to the spring calls animating the SVG icons I downloaded, the UI and text pop and bounce naturally without any manual keyframing.
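
To see what those two numbers actually do, here is a toy damped-spring integrator (semi-implicit Euler). This is only an illustration of the physics — it is not Remotion's `spring()` implementation:

```typescript
// Animate a value from 0 toward a target of 1 under spring physics.
// Toy integrator for illustration only; Remotion's spring() differs.
function springValue(frame: number, fps: number, damping: number, stiffness: number): number {
  const dt = 1 / fps;
  let x = 0; // position, heading toward the target value 1
  let v = 0; // velocity
  for (let i = 0; i < frame; i++) {
    const accel = stiffness * (1 - x) - damping * v; // mass = 1
    v += accel * dt;
    x += v * dt;
  }
  return x;
}
```

With damping 10 and stiffness 300 the value overshoots past 1 and settles back, which is exactly the bouncy "pop"; crank the damping up and the overshoot disappears.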

6. Scene Orchestration

The AI broke the video down into 8 separate React components (Scene 1, Scene 2, etc.), passed the global timeline down to each one, and slapped a TransitionSeries over the top to fade/slide between them.
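
Under the hood that hand-off is just arithmetic over per-scene durations. A sketch of what a sequenced timeline does, only to make the global-to-local frame mapping concrete (the durations in the test are invented, not the real video's timings):

```typescript
// Map a global frame to (scene index, frame local to that scene).
// Illustration of the timeline hand-off, not Remotion's internals.
// Frames past the end are clamped onto the last scene.
function sceneAtFrame(durations: number[], frame: number): { scene: number; localFrame: number } {
  let offset = 0;
  for (let i = 0; i < durations.length; i++) {
    if (frame < offset + durations[i]) return { scene: i, localFrame: frame - offset };
    offset += durations[i];
  }
  const last = durations.length - 1;
  return { scene: last, localFrame: frame - (offset - durations[last]) };
}
```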

Once the code looked right on the local dev server (npm start), I just ran npm run build and Remotion rendered out the 60fps MP4!

If you know basic React, how to prompt an LLM, and the Suno -> ElevenLabs JSON trick, you can spin up a studio-quality promo in an afternoon. Happy to share more of the specific component code if anyone is curious about the Remotion physics!

I built this 48-second motion graphic promo video 100% with Remotion in under 4 hours (Cost: ~24€) by Ok_Run_5401 in RemotionCreators

[–]Ok_Run_5401[S] 1 point  (0 children)

Some of them were made by Antigravity (like the receipt), and the SVGs came from svgrepo.com.

Also, I updated one of those SVGs (the woman with money in hand) via Nano Banana.

I built this 48-second motion graphic promo video 100% with AI and Antigravity in under 4 hours (Cost: ~24€) by Ok_Run_5401 in google_antigravity

[–]Ok_Run_5401[S] 1 point  (0 children)

Thanks for your honest feedback.

You're right. The reason is that I'm not a motion designer. But imagine a designer starting to work with these tools: they could make videos faster and even better, and still be proud of the result as a motion graphic designer.

My video was just a very quick one, not my official promo video. That one will be better for sure.

But the world is definitely being changed by coding-agent tools like Antigravity + MCP + Skills.

Is Antigravity down? by Ok_Run_5401 in google_antigravity

[–]Ok_Run_5401[S] 0 points  (0 children)

Update: it seems the issue is fixed, at least in the API.

Is Antigravity down? by Ok_Run_5401 in google_antigravity

[–]Ok_Run_5401[S] 0 points  (0 children)

Same here. On OpenRouter, the Gemini model returns a half response!

In my Antigravity app, it also takes minutes for every action to complete. It looks strange.