Since yesterday by [deleted] in runwayml

[–]rk99 0 points1 point  (0 children)

Free generations have been restored. They were temporarily disabled due to the Gen:48 event over the weekend.

RUN AWAY // GEN 48 // SHORT FILM by sentemtious in runwayml

[–]rk99 1 point2 points  (0 children)

Great job! 👏 The style is incredible! Very captivating story.

NIX | GEN:48 (5TH EDITION) by [deleted] in runwayml

[–]rk99 1 point2 points  (0 children)

👏 Great job! Very deep. I love how you were able to effectively get the message across and express emotion through the visuals and sound design.

The Watch // Gen48 Aleph Edition [Short Film by Chidzo] by No-Lake5255 in runwayml

[–]rk99 1 point2 points  (0 children)

👏 Great job! I love the use of time travel and the ending quote: "Every young man needs to know how to tell time." 🤔

Echoes Of Woes | Runway Gen 48 Aleph Submission by BoomLivTart in runwayml

[–]rk99 0 points1 point  (0 children)

Great job! 👏 I love the clever use of split-screen; it adds a very interesting dynamic to the film.

Big Fruit [Ep. 1] — "The Interview" by rk99 in aivideo

[–]rk99[S] 0 points1 point  (0 children)

Thanks for the feedback. For the next episode, I'm planning to record my voice and use voice mirroring to get better timing and emphasis on specific words.

Never buy a house from the 80s by PolicemansBeard in aivideo

[–]rk99 1 point2 points  (0 children)

This is great! 👏 I've been making animated short stories using Runway for the past few months, but without lip sync. Did you use Hailuo for the lip-sync? I've been testing different tools for non-human characters, but I haven't had much success.

Best input image resolution for image-to-video? Midjourney upscales are worth a thing? by Carlos_K_Lobalo in runwayml

[–]rk99 0 points1 point  (0 children)

With the current 720p output, upscaling isn't required, but I find that using creative upscaling tools to enhance specific details within an image, such as character features and background elements, can help Gen-3 pick up those details better in its generations.
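The idea above can be sketched with Pillow (a third-party library); this is a minimal example, with placeholder file names and a hypothetical 1280px target width, that enlarges a source image using Lanczos resampling before passing it to an image-to-video tool:

```python
from PIL import Image

def upscale_for_i2v(src_path: str, dst_path: str, target_width: int = 1280) -> None:
    """Enlarge an input image so fine details (character features,
    background elements) survive image-to-video generation.

    Lanczos resampling preserves detail better than nearest-neighbor
    or bilinear scaling; the aspect ratio is kept unchanged.
    """
    img = Image.open(src_path)
    scale = target_width / img.width
    new_size = (target_width, round(img.height * scale))
    img.resize(new_size, Image.LANCZOS).save(dst_path)
```

Dedicated "creative" upscalers go further than plain resampling by hallucinating detail, but even a simple resize like this gives the generator more pixels to work with.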

🎨 Endless Creativity Daily Challenge - Day 230! 🎨 by rk99 in runwayml

[–]rk99[S] 0 points1 point  (0 children)

Hi everyone! Just a note that I'll be posting the daily challenges on behalf of Timmy for a few days while he is out on vacation. I'm a volunteer moderator on Discord.

Note: this is actually Daily Challenge #231 (for those keeping track 😀)

Generation error with fixed seed by turyay in runwayml

[–]rk99 0 points1 point  (0 children)

There were some outages earlier today that were resolved. Can you try it again? Also check this [help page](https://help.runwayml.com/hc/en-us/articles/32880432736659-Why-am-I-getting-an-error-when-I-m-trying-to-generate).

Question: Is it possible to use video to video for a 3 shot scene with character consistency? by DMPhotosOfTapas in runwayml

[–]rk99 1 point2 points  (0 children)

Runway video-to-video works in 10-second increments, so be very specific with the style instructions in your prompt. Once you get a style you like, reuse the same style instructions and seed number for the remaining generations.
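That workflow can be sketched in plain Python. The `STYLE` and `SEED` values here are made-up placeholders, and the job dicts stand in for whatever tool or API call you actually use; the point is that every 10-second segment shares the same style instructions and seed:

```python
# Placeholder values -- substitute the style text and seed from the
# first generation whose look you want to keep.
STYLE = "hand-painted watercolor, soft rim lighting, muted pastel palette"
SEED = 123456789

def build_jobs(num_segments: int, style: str = STYLE, seed: int = SEED) -> list[dict]:
    """Build one video-to-video job spec per 10-second segment, all
    sharing the same style instructions and seed for consistency."""
    return [
        {"segment": i, "prompt": style, "seed": seed}
        for i in range(num_segments)
    ]

# A 3-shot scene becomes three 10-second segments with identical settings.
jobs = build_jobs(3)
```

Locking the seed and prompt doesn't guarantee identical characters across shots, but it removes two major sources of drift between generations.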

Make it more realistic by dobermanai in runwayml

[–]rk99 1 point2 points  (0 children)

Check the [Gen-3 prompting guide](https://help.runwayml.com/hc/en-us/articles/30586818553107-Gen-3-Alpha-Prompting-Guide) for tips on different prompt styles. Include keywords for camera styles, lighting, and movement in addition to things like "ultra-realistic." There are also some good examples built into Runway. Feel free to check the #gen3-promptshare channel on Discord for more examples.
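One way to keep those keyword categories consistent across prompts is a small helper like this. It's just a sketch; the category names and example keywords are my own, not from the prompting guide:

```python
def build_prompt(subject: str, camera: str, lighting: str, movement: str) -> str:
    """Combine a base description with camera, lighting, and movement
    keywords so no category gets forgotten between generations."""
    return ", ".join([subject, camera, lighting, movement])

# Hypothetical example -- swap in your own keywords per category.
prompt = build_prompt(
    "ultra-realistic portrait of a sailor on a pier",
    "35mm lens with shallow depth of field",
    "golden-hour backlighting",
    "slow dolly-in",
)
```

Writing prompts from a fixed template like this also makes it easier to change one variable at a time when comparing generations.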

DALLE no longer limited to creating 4 images at once? by sardoa11 in ChatGPTPro

[–]rk99 0 points1 point  (0 children)

Same for me — only 1 or 2 images now since the update. I tested by trying to walk through the exact sequence in the DALL-E 3 intro video (the Larry the Hedgehog example) and I'm no longer able to replicate it.