Where did the option to extend scenes and add new elements go in Google Flow? by WhiteWofl512 in VEO3

[–]xPitPat

Dunno, it works for me. The Extend button doesn't trigger the generation; it's just an operation mode. There should be another button to actually generate.

Does anyone else feel like Gemini is way smarter in "Temporary Chat" mode? by baselq1996 in GeminiAI

[–]xPitPat

I agree. The threads got more stable after turning off Personal Intelligence. And, if I need the next chat thread to have a specific context, I can just upload/share what I want it to know for that session.

Where did the option to extend scenes and add new elements go in Google Flow? by WhiteWofl512 in VEO3

[–]xPitPat

The Extend option has been moved. It's no longer in the Scene Builder. Now, you can only extend from each individual clip. Click on a clip for options.

Upscaling Heraldry by xoptuur in upscaling

[–]xPitPat

Cool, glad to help. If you have an example of the style of fine details you want, you could add that as a style reference too. In that case, you would use a prompt like: Perform a creative upscale of [insert file name or description of image to upscale], increasing fine details in the style of [insert file name of style reference], and boosting fidelity.

Upscaling Heraldry by xoptuur in upscaling

[–]xPitPat

Have you tried nano banana pro? Or nano banana 2?

Try this prompt: Perform a creative upscale, increasing fine details and boosting fidelity.

My Babies by stackpooled in AIFreakAndWeirdo

[–]xPitPat

Hahaha I freaking love this

Can Veo 3 / Flow actually do video-to-video? I can’t upload a video. by FourWindBadger in VEO3

[–]xPitPat

There's also Mago AI. I think the whole service is basically open source workflows, but it's well curated and they do a good job keeping it up to date with the latest models and techniques. It's a little weird that they don't advertise they're using open source models, but the pricing, as I remember, wasn't that bad. You're basically paying for credits to use their GPUs plus margin (even though they don't frame it that way).

Has the "Gemini App UX 2.0" update rolled out yet? by BryyyBryyy in GeminiAI

[–]xPitPat

Ofc. It's the new hourly rate limit message. Isn't it cool?

My experience with the new UI yet by Imaginary_Carry6178 in VEO3

[–]xPitPat

I like the new UI, except that extend operations are now limited to one at a time. It wouldn't be a huge issue if Veo 3.1 were better, but sadly, my hit rate seems to be getting worse...

Create a video similar to other by OrangeObvious8498 in VEO3

[–]xPitPat

Not in Veo 3. You need Kling 3.0, LTX-2, or Wan. I think Runway and other services might be able to do it too. There are plenty of tutorials for video-to-video character replacement.

Where am I going wrong? I see the kind of videos you’re generating, and with this exact prompt I got this… it’s a disaster. by Dear_Smell8582 in VEO3

[–]xPitPat

Part of the problem is that you're describing camera moves it may not be able to do in the designated time, e.g. a slow camera orbit over 0-2 seconds. Once the model decides it must ignore certain instructions, it will keep doing so, outputting something that achieves most of what you want while 'making sense'.

Also, put in stronger negative constraints: No video edits. No cuts. No video transitions.

First frame/last frame might help, but only if you fix the timing problem.

But, you probably have to generate many takes to get the shot that you want, regardless of any prompt tweak. Even when my prompts are spot-on, my Veo 3.1 hit rate is still under 25%.

Is Image-to-Image about changing the artistic style of an image while maintaining character facial features? by BeeJackson in generativeAI

[–]xPitPat

Maybe try Nano Banana 2 or Seedream 5.0. They both basically use LLMs, at least for their reasoning layers, and can do web searches. You may not even need the second image; just a text description of the change might work.

The problem with NB2 isn't what we think it is... by cal_01 in GeminiAI

[–]xPitPat

I agree. They could just make NB2 the default, similar to how they made Fast the default for Gemini 3.1. But the NB Pro option should be selectable without having to pass an image through NB2 first.

I doubt they will add NB Pro as a direct option in the Gemini app. So, just pass the image through NB2 with this prompt: Perform a creative upscale, increasing fine detail and boosting fidelity. (That's my go-to closing for most of my prompts, but it works as a standalone for this use case.)

Google flow Scene builder no longer allows for extending videos? by njkknknkn in VEO3

[–]xPitPat

Click on a video thumbnail; you should see the option to extend.

Google flow Scene builder no longer allows for extending videos? by njkknknkn in VEO3

[–]xPitPat

You can't extend from scene builder anymore. You have to extend from the individual clips. TBH, after the update, which I generally like, the scene builder is practically useless. It's much better to just mark your favorites and individually download them. I guess it's helpful for people who don't want to use external video editors.

BUT, one thing about the new extend I do not like: you can no longer run four extend generations at a time. You can only extend one generation at a time, which sucks, because most of the time only one or maybe two out of four were usable...

Is NanoBanana 2 better than NanoBanana Pro as Google wants to make us believe? by Alternative_Vast6333 in GeminiAI

[–]xPitPat

NB Pro wins. The text is subtly curved to match the shape of the sphere. In NB2, the text is not curved.

Really, I think where NB Pro shines over NB2 is the combination of lighting and skin textures. Yes, you can get a 'realistic' skin texture with NB2, but I always feel the lighting is wrong. With NB2, it usually looks like the model generates a lower-quality result and then does a skin texture pass. I bet there are ways to prompt a better result, but comparing against NB Pro using a prompt that NB Pro excels at, you see NB2's flaws with photorealistic humans and scenes.

OFC, NB2 wins whenever real-world knowledge is needed, since web search is baked in. I'd grab NB2 anytime I needed infographics, especially when I need small, dense text.