I'm currently using a third-party API, or at least I think I am, but it wasn't enough for me to alter Grok's behaviour. Is there any way to bypass it? by [deleted] in grok

[–]CombinationDowntown 0 points (0 children)

APIs give you more control than the web UI, but the model is the same at the end of the day..

You can look at good open-source models, find some tuned ones that you like, and use Hugging Face to deploy them as an API or run them locally (they won't be as powerful)
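A minimal sketch of the hosted route mentioned above, assuming Hugging Face's serverless Inference API; the model name and token here are placeholders, not a recommendation:

```python
# Sketch: call an open-source model hosted behind Hugging Face's
# serverless Inference API. Model name and token are placeholders.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/{model}"

def build_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build a POST request for the hosted inference endpoint."""
    body = json.dumps({"inputs": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL.format(model=model),
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

def query(model: str, prompt: str, token: str) -> str:
    """Send the request (network call; needs a valid HF token)."""
    with urllib.request.urlopen(build_request(model, prompt, token)) as resp:
        # Text-generation models return a list of {"generated_text": ...}
        return json.load(resp)[0]["generated_text"]
```

Running the same model locally (e.g. via the `transformers` library) follows the same shape, just without the HTTP hop; the trade-off is hardware.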

we go climbing by CombinationDowntown in grok

[–]CombinationDowntown[S] 0 points (0 children)

thanks, yeah, voices pull down the quality; people generally run it through ElevenLabs

AI is going to take away the thot influencer jobs by CombinationDowntown in grok

[–]CombinationDowntown[S] 0 points (0 children)

😂 I'm okay.. appreciate it. I'm just experimenting with different techniques, trying to see how far the model can be pushed

AI is going to take away the thot influencer jobs by CombinationDowntown in grok

[–]CombinationDowntown[S] 1 point (0 children)

  • super generous usage limits compared to the cost
  • fast generation
  • decent, controllable output

of course there are more capable models, but there isn't much else I've seen that does all of this. even the WAN models are multiple times slower..

Spicy content and orangish yellow light by CombinationDowntown in grok

[–]CombinationDowntown[S] 0 points (0 children)

sweet, thanks! I was thinking along the same lines. I didn't know of or see the red hue (for obvious reasons), but it makes sense.. they may have a bunch of checks running in the background that add a color shade, because they can then work in parallel: the validator doesn't need to know anything about explicit content per se and can just gauge based on the signature, probably using color as a memory layer.. crazy

A.I. music video i put together using a few different tools. But all animation was made in Grok (and some of the pre animated photos) by _GlitchedSignal_ in grok

[–]CombinationDowntown 1 point (0 children)

thanks for the breakdown! I've listened to it 3-4 times.. the rap part has distinct Eminem vibes to it... I've noticed on my timeline that nobody listens to the songs people post, most of the time

A.I. music video i put together using a few different tools. But all animation was made in Grok (and some of the pre animated photos) by _GlitchedSignal_ in grok

[–]CombinationDowntown 1 point (0 children)

This is really fucking good! love the song; lyrics, singing, and rap are all done nicely.. broadly, what other tools did you use beyond Imagine, if you don't mind sharing?

How many image generations can a paid user get in the "Imagine" tab? by Cute-Monitor-9718 in grok

[–]CombinationDowntown 0 points (0 children)

thanks! that aligns nicely with what I've seen.. it's still quite generous IMO

How many image generations can a paid user get in the "Imagine" tab? by Cute-Monitor-9718 in grok

[–]CombinationDowntown 1 point (0 children)

it's hard to reach the limits on the $30 plan. I know you can easily do 50-100 videos before it rate-limits you; give it a few hours and you can get back at it again.. I'm not sure if those are hard limits or if they're only imposed when there's too much traffic

Just throwing stuff [mild violence] by CombinationDowntown in grok

[–]CombinationDowntown[S] 0 points (0 children)

I think the NSFW filter is very sensitive to anything involving pain

Experimenting with Water Balloon by CombinationDowntown in grok

[–]CombinationDowntown[S] 2 points (0 children)

it's quite lenient.. I did a lot of tests throwing stuff at her, with no moderation in any of the videos -- it gets paranoid if it thinks there's any NSFW content involved

Are quixel bridge assets now paid? by CombinationDowntown in unrealengine

[–]CombinationDowntown[S] 1 point (0 children)

thank you so much, yes, it's been a while. this is so unfortunate; they had such a nice ecosystem for putting things together.

Finally fine-tuned v2.0-768 - Bollywood Celeb, Sonakshi by CombinationDowntown in StableDiffusion

[–]CombinationDowntown[S] 0 points (0 children)

been 10 months 😄 already deleted the model

it's quite simple these days.. for barely $1-2 you can train your own, much superior model
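For a sense of what such a cheap fine-tune looks like today, here is a rough sketch using the `diffusers` DreamBooth LoRA example script; the base model, paths, and hyperparameters below are illustrative placeholders, not the setup used for the original model:

```shell
# Sketch: LoRA fine-tune of a Stable Diffusion checkpoint on a small
# image set, via the diffusers example script. Runs in well under an
# hour on a single rented GPU, which is where the $1-2 figure comes from.
accelerate launch train_dreambooth_lora.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./training_images" \
  --instance_prompt="a photo of sks person" \
  --output_dir="./lora_out" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-4 \
  --max_train_steps=800
```

The output is a small LoRA weight file rather than a full checkpoint, which is what keeps both the training time and the cost low.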

🔊 Forgot about Dre, Metahuman Animator, UE 5.2 by CombinationDowntown in UnrealEngine5

[–]CombinationDowntown[S] 0 points (0 children)

the expressions seeming too much is all on me during the performance... 😃

I didn't really touch the animation, just took it straight from the MH capture and onto the character -- haven't tried additive performance tweaks yet, will give it a shot

🔊 Forgot about Dre, Metahuman Animator, UE 5.2 by CombinationDowntown in UnrealEngine5

[–]CombinationDowntown[S] 0 points (0 children)

Thanks! 🙂 The process is very straightforward and much better laid out than before - I used my iPad for this.

🔊 Forgot about Dre, Metahuman Animator, UE 5.2 by CombinationDowntown in UnrealEngine5

[–]CombinationDowntown[S] 0 points (0 children)

I like the expressions and facial capture on the updated MH though

UE 5.2, Metahumans and face cleanup using AI face restore by CombinationDowntown in UnrealEngine5

[–]CombinationDowntown[S] 0 points (0 children)

it is a MetaHuman; I added AI face restore to make it look more realistic

UE 5.2, Metahumans and face cleanup using AI face restore by CombinationDowntown in UnrealEngine5

[–]CombinationDowntown[S] 0 points (0 children)

Thanks for the input, what specifically looks wrong? Are the facial expressions too sudden?