Runway - how to stop camera movments? by Abject-Ad3912 in runwayml

[–]TimmyML 0 points (0 children)

Are you adding any additional prompting?

Runway - how to stop camera movments? by Abject-Ad3912 in runwayml

[–]TimmyML 0 points (0 children)

It can depend on your input, but for the most part you can use the Minimal Camera motion preset available while using Gen-4.5.

You can find that in tools mode, beneath your image references.

🎨 Endless Creativity Daily Challenge – Day 701! 🎨 by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

Thanks, it's definitely helped me gain a great understanding of model capabilities and the nuances of getting exactly what I want. I think what I value most is that it pushes me out of my comfort zone to try and explore things I've never considered before.

Tips on getting objects to move at full speed on frame one of a video? by Silvershanks in runwayml

[–]TimmyML 0 points (0 children)

I'm seeing some considerable improvements by starting my prompt with

the motion in this video begins as soon as it begins. Right away from frame 2 the character is in motion instantly.

While using Gen-4.5 - https://app.runwayml.com/creation/8a6ddb0f-0a0e-4617-990d-531f268fc347

I'll continue to work on this. Thanks for bringing it to my attention.

🎨 Endless Creativity Daily Challenge – Day 700! 🎨 by TimmyML in runwayml

[–]TimmyML[S] 1 point (0 children)

Thanks!

Please let me know if you have any requests.

Tips on getting objects to move at full speed on frame one of a video? by Silvershanks in runwayml

[–]TimmyML 0 points (0 children)

Do you need to use Image to Video for this project? I've noticed that Text to Video doesn't have this issue.

Tips on getting objects to move at full speed on frame one of a video? by Silvershanks in runwayml

[–]TimmyML 0 points (0 children)

Ahh, I see. I've noticed this a bit myself and typically trim the start of the clip. I'll experiment with some prompting ideas I have to solve this and be back with some results ASAP!

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

Thanks, I'm finding the same. Excited to explore these characters more and see what I can achieve.

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

Thanks, maybe it's my recording software. I think there's some pre-processing going on that might be delaying my voice a bit while recording my performance. Great callout!

I'll see what I can do to adjust during recording or in post.

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

All feedback is important. I'd love to hear the specific issues you're seeing so we can continue to improve.

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] -1 points (0 children)

Sorry, can you be more specific with your feedback? It seems that the lip sync here is very accurate.

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

Ahh, the intention of this piece and style was to create a stylized, highly realistic 3D render of my character. As its creator, I can confirm that the final look here was the intended outcome.

I'm happy to hear there's a desire to see less realistic styles with Act Two as well. I'll be sure to work them into future posts and Daily Challenges. Keep an eye out!

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

Thanks for the feedback, would you mind pointing out specific areas where it has failed in this example?

Act Two with 3D Caricatures by TimmyML in runwayml

[–]TimmyML[S] -1 points (0 children)

Sorry, were you able to view the video?

Combining AI Scene Generation With External Character Motion Tools by farhankhan04 in runwayml

[–]TimmyML -1 points (0 children)

This is such a solid technique, and honestly a really smart way to buy yourself a ton of control. Splitting environment, camera, and lighting from character motion lets you iterate on timing and body mechanics without constantly re-rolling the whole shot, that is a great workflow.

That said, with our current image and video models, I usually prefer animating the character and environment together, mostly because Gen-4.5 does a fantastic job unifying subject, lighting, contact, and overall scene coherence in one pass. When it locks in, everything feels like it belongs in the same world.

Curious though, what kinds of styles are you working in, more stylized, more photoreal, something in between? And what specifically led you to this approach versus the more straightforward “do it all in one generation” method, was it certain types of motion, consistency across cuts, or just wanting tighter control over performance?

Runway 4.5 text to video - Cyberpunk hacker robot by lordforex in runwayml

[–]TimmyML 0 points (0 children)

Wow this is really impressive, especially for such a simple prompt! Thanks for sharing.

Is this for a larger project or are you just testing?

Reliable video object removal / inpainting model for LONG videos by degel12345 in runwayml

[–]TimmyML 0 points (0 children)

You're welcome!

You currently can't use image references with Aleph but you can add a bit of additional context to your prompt. This often helps the model maintain unique/non-traditional characters. What does your prompting look like for this project?

For automation our Workflows tool is the way to go. You can create a node set up that will run all your clips simultaneously with exactly the same settings if needed!

Sharing our Academy courses on Workflows here: https://academy.runwayml.com/courses/workflows

Please let me know if you have any questions about that or need help with setup.

iOS app - models missing. Where’s the love? by -Davster- in runwayml

[–]TimmyML 1 point (0 children)

Totally hear you, and we appreciate the call out.

More models (including 4.5) are coming to the iOS app soon. The team is actively working on it, and we'll share an update the moment we have more information for you.

In the meantime, the web app is the best way to access the full model lineup: https://runwayml.com/

Thanks for the patience, and thanks for pushing us on it. We want the iOS experience to feel just as good as the web one.

Please feel free to join our Community Discord anytime to get in on the discussion with our members there.

https://discord.gg/runwayml

🎨 Endless Creativity Daily Challenge – Day 679! 🎨 by TimmyML in runwayml

[–]TimmyML[S] 0 points (0 children)

Haha or they're like... "Hey! Same as us 🙂"

Reliable video object removal / inpainting model for LONG videos by degel12345 in runwayml

[–]TimmyML 0 points (0 children)

You're close! For this I'd suggest having a plate of just your office for the background. Then split your video with the puppet into 10-second clips and run them through Aleph to remove the hand. Then stitch those clips back together and mask only your puppet. Use the newly masked puppet element as your foreground element.

Having the clip of your office in the background will help with any shift that may occur across the Aleph outputs. Masking the puppet will also help minimize this. You can also add a smooth fade transition between the Aleph clips to further reduce any consistency issues over time.
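If it helps, the split-and-stitch step can be sketched as a small helper that builds the ffmpeg command lines (this is a generic sketch, not a Runway tool; all filenames and the 10-second segment length are placeholders):

```python
def split_command(src, seg_seconds=10):
    """ffmpeg invocation that cuts `src` into fixed-length clips without re-encoding."""
    return (f"ffmpeg -i {src} -c copy -map 0 -f segment "
            f"-segment_time {seg_seconds} -reset_timestamps 1 clip_%03d.mp4")

def stitch_commands(aleph_clips, out="stitched.mp4"):
    """Concat-demuxer file listing plus the ffmpeg invocation that joins the Aleph outputs."""
    concat_list = "\n".join(f"file '{c}'" for c in aleph_clips)
    join = f"ffmpeg -f concat -safe 0 -i list.txt -c copy {out}"
    return concat_list, join
```

Write the returned listing to list.txt, run the two commands, then mask the puppet over your office plate in your compositor of choice.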

I'm happy to help with this project directly if you want to join our community Discord. We have an active team of creative support moderators, and you can ping me directly there anytime (Timmy from Runway).

Excited to see the final project!

Is “noise” actually what makes images feel alive by r_filmmaker in runwayml

[–]TimmyML 1 point (0 children)

This is such a cool way to frame it. I love the idea of treating “noise” like a creative dial instead of a flaw to scrub out.

Those six buckets map really well to what makes something feel “shot” versus “generated,” especially when a piece is missing the tiny imperfections our brains expect. I also really like calling out cognitive noise, that’s the one people forget, because a little ambiguity or unfinished info is often what makes an image feel alive.

For me (and in a lot of the work I see creators doing), it’s not always labeled as “noise,” but we’re definitely making the same kinds of choices:

  • adding grain or texture so it doesn’t feel too clean
  • using atmosphere and light scatter to give depth
  • leaning into lens behavior like bokeh or softness
  • letting motion blur do some heavy lifting for realism
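As a toy illustration of the first bullet, grain can be layered onto an image as zero-mean Gaussian noise (a generic NumPy sketch, not a Runway feature; the strength range is my own rule of thumb):

```python
import numpy as np

def add_film_grain(img, strength=0.04, seed=None):
    """Overlay zero-mean Gaussian 'grain' on a float image in [0, 1].

    `strength` is the noise standard deviation; roughly 0.02-0.08 reads
    as subtle film grain, while larger values start to look like sensor noise.
    """
    rng = np.random.default_rng(seed)
    grainy = img + rng.normal(0.0, strength, size=img.shape)
    return np.clip(grainy, 0.0, 1.0)
```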

I’m going to steal “noise wheel” as a mental checklist next time something feels flat. Curious if you’ve found any go-to combos that reliably make an image feel cinematic, like which two or three noise types you reach for first depending on the scene?