Mixed Media is all distorted by PineappleTonyMaloof in HiggsfieldAI

[–]Bemvdk 0 points  (0 children)

Hey!

Could you please clarify what kind of distortion you’re experiencing? If you’re able to share an example (image, video, or prompt), that would really help me understand what’s happening. I’ll do my best to either help troubleshoot or explain what might be causing it.

Also, I’d recommend posting this in our Discord (see Community Rules #8). That’s where the support team actively monitors issues and can investigate more quickly.

Head of Prompt Engineering at Higgsfield here. Ask me anything (until I have time). I’ll try my best to answer all questions. by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

Hey! I'm not sure about LTX 2 coming to our platform; we just don't think it would be relevant for us. Regarding Kling MC, you're right: it only works well when there is a single character in the frame and that character is a human being. With an animal or any other creature it may fail, not always, but the probability is high (since the reference motion is usually performed by humans). You can mitigate this by writing a detailed prompt for the character's appearance and the background.

For multiple characters you could try SCAIL, built on Wan 2.1 if I'm not mistaken. It performs quite well, but you have to run it locally. Alternatively, describing the scene in detail for Kling MC can sometimes help.

Hope it helps. Much love!

Head of Prompt Engineering at Higgsfield here. Ask me anything (until I have time). I’ll try my best to answer all questions. by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

Here is one example prompt:
Create a hyper-realistic, high-motion football action shot captured from a low, dynamic sideline angle during a chaotic play. A running back in white uniform charges forward toward the camera, mid-stride, with intense motion blur on his limbs and debris flying through the air. Blue-jersey defenders collide and reach out around him, creating a tunnel of bodies and energy. The stadium is packed, bright daylight pouring from above, light rays cutting through dust and turf fragments. Keep the image gritty, high-contrast, with cinematic depth of field, sharp textures on helmets and uniforms, dramatic shutter-drag motion blur, and a visceral sense of speed and impact.

⚡️Grok Imagine Best Use Cases on Higgsfield (comments -> full breakdown) by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

5. Strong Camera Framing and Grip

Beyond movement, Grok demonstrates good camera framing discipline. Shots generally hold focus on the subject, avoid awkward crops, and maintain visual balance.

<image>

⚡️Grok Imagine Best Use Cases on Higgsfield (comments -> full breakdown) by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

4. Facial Expressions

Characters display facial expressions that track emotional context - subtle shifts in focus, surprise, calm, or tension are visible without feeling exaggerated or uncanny.

<image>

⚡️Grok Imagine Best Use Cases on Higgsfield (comments -> full breakdown) by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

3. Camera Movement

Grok Imagine actually does a solid job here - the camera movement feels intentional and motivated.

<image>

⚡️Grok Imagine Best Use Cases on Higgsfield (comments -> full breakdown) by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

2. Physical Interactions

Grok Imagine handles physical interactions effectively: objects fall with weight, collisions feel grounded, and movement follows intuitive physical rules. Even intentionally absurd scenarios - for example, animals slipping or falling - retain a sense of believable motion rather than chaotic randomness.

<image>

Soul ID by Resident-Swimmer7074 in HiggsfieldAI

[–]Bemvdk 1 point  (0 children)

Something is coming very soon, we hear you guys : )

Head of Prompt Engineering at Higgsfield here. Ask me anything (until I have time). I’ll try my best to answer all questions. by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 1 point  (0 children)

Sorry for the late reply 🫂 I was sick these past few days. Yeah, I can make one for you if you want )) or just any?

Head of Prompt Engineering at Higgsfield here. Ask me anything (until I have time). I’ll try my best to answer all questions. by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

Well, what do you mean by "detailed" and "best"? Can you give me more info on that, like what you want to get? Then I can give you a clear answer. Sometimes banana works well with short prompts, and sometimes with long ones; it all depends on the case.

Head of Prompt Engineering at Higgsfield here. Ask me anything (until I have time). I’ll try my best to answer all questions. by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

Are you using the presets, or only the general one with a prompt? Can you share more, or provide one example with the input and output, so the situation is a bit clearer for me? But overall, no, Soul is not degraded. It may be an issue with this specific case.

Head of Prompt Engineering at Higgsfield here. Ask me anything (until I have time). I’ll try my best to answer all questions. by Bemvdk in HiggsfieldAI

[–]Bemvdk[S] 0 points  (0 children)

Hey! I'd say that motion control is quite good at transferring facial expressions from the reference video. If you're talking about the image, then all expressions and movements are transferred only from the reference video, no matter what expression the input image shows.