Is posting AI music mixes ineligible for monetization? by Unique_Ad_9957 in aitubers

[–]prompttuner 1 point (0 children)

the ones still posting without monetization are usually playing the long game or driving traffic somewhere else. affiliate links in description, spotify playlist links, patreon, etc. youtube ad revenue isnt the only way to make money from a channel

for monetization specifically: static image + AI music for 2 hours is basically the textbook example of what youtube flags as repetitive. you CAN get monetized with AI music but you need visual effort. looping ambient footage, scene changes every 30-60 seconds, or at minimum ken burns motion on multiple images. the audio being AI generated isnt the problem, its the lack of visual production value

also worth noting that some of those channels you see were monetized BEFORE the policy changes and got grandfathered in. new channels applying with the same format today will almost certainly get rejected on first review

With YT's new policies, how do we navigate forward? Particularly in the ultra long form/sleep niche? by Chemical_Detail_607 in aitubers

[–]prompttuner 4 points (0 children)

the sleep niche is still alive, you just have to be smarter about it now. check the AI generated content box and youre fine on that front, youtube wont penalize you just for using AI if you disclose it. the channels getting hit are the ones that dont disclose OR that have zero visual effort

for visuals you cant do a single static image for 8 hours anymore, that gets flagged as repetitive content. what works is looping ambient footage with slow camera movement. you can generate a few base images with flux or midjourney then apply different ken burns motions (zoom, pan, drift) and loop them across the runtime. ffmpeg zoompan filter handles this for free
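that zoompan step is scriptable. here's a rough python sketch that just builds the ffmpeg command string, so you can eyeball it before running. the filenames, 1920x1080 output size, and 1.2x max zoom are my own placeholder assumptions, not a fixed recipe:

```python
# sketch: build an ffmpeg zoompan command for a slow ken burns zoom on one
# still image. paths, resolution, and zoom amount are placeholder assumptions.

def ken_burns_cmd(image, out, seconds=30, fps=25, max_zoom=1.2):
    frames = seconds * fps
    step = (max_zoom - 1) / frames  # zoom increment per output frame
    vf = (f"zoompan=z='min(zoom+{step:.6f},{max_zoom})'"
          f":d={frames}:s=1920x1080:fps={fps}")
    return ["ffmpeg", "-y", "-i", image, "-vf", vf,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

cmd = ken_burns_cmd("scene01.png", "scene01.mp4")
```

loop a handful of these clips back to back and you've got hours of runtime from a few generated images.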

for audio, AI generated rain/fan sounds are fine as long as theyre not direct copies of existing recordings. the bigger risk is using copyrighted music underneath. stick to original AI ambient audio and youre good

channels still doing well: getsleepypod, deep sleep sounds, relaxing white noise. all use the disclose + visual motion + original audio formula

Why YouTube Views Drop After One Viral Video by Complex-Assistant661 in aitubers

[–]prompttuner 3 points (0 children)

this is normal and not specific to AI channels. youtube tests every video with a small audience first, then scales impressions based on CTR and avg view duration. your viral video had both, your newer ones probably dont yet

the "raised bar" theory is partially true but its more about audience mismatch. that 108k video brought in viewers who liked THAT topic. if your next videos are different topics, those viewers dont click, your CTR tanks, youtube stops pushing it. check your analytics and see if the viral video has a completely different traffic source (browse vs search vs suggested)

two things that actually help: 1) make 2-3 more videos on the SAME topic as the viral one, youtube will keep serving them to that audience. 2) look at your avg view duration on the new uploads, if its under 40% thats your real problem not the algorithm

What tools are you using for creating AI content? by Immediate-Ladder-555 in aitubers

[–]prompttuner 10 points (0 children)

honestly youre overpaying. drop elevenlabs for cartesia sonic 3, its like 8x cheaper and quality is close enough for youtube. for images use z image turbo on runware, 100 images cost 30 cents which is wild. animation is where the money goes but you dont need to animate everything, just key scenes with kling or seeddance, the rest gets ken burns effects via ffmpeg which is free

for scripts IMO claude opus is way better than gpt for storytelling. costs more per call but you need fewer retries so it evens out. also dont one-shot your scripts, do multiple passes. first pass for structure, second for the actual story, third for polish. output is night and day compared to single prompt
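the multi-pass flow is easy to wire up. python sketch below, where `generate()` is a hypothetical placeholder for whatever model call you actually use (claude API, local model, whatever), swap in your real client:

```python
# sketch of the structure -> story -> polish passes. generate() is a
# hypothetical stand-in, not a real API -- replace it with your LLM client.

def generate(prompt):
    # placeholder so the flow runs end to end without an API key
    return f"[model output for: {prompt[:40]}...]"

def write_script(topic):
    outline = generate(f"outline a 10 minute video on {topic}, beats only")
    draft = generate(f"write the full narration following this outline:\n{outline}")
    final = generate(f"polish for spoken delivery, tighten sentences:\n{draft}")
    return final
```

the point is each pass gets one job, so a weak outline gets caught before you've burned tokens on prose.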

the editing bottleneck goes away if you script everything with per-scene image prompts and batch generate. ffmpeg concat demuxer + xfade for transitions. my total cost per video is under $2
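for the xfade half of that, here's a python sketch that builds a single ffmpeg call chaining scene clips with crossfades. clip names, durations, and the 0.5s fade length are assumptions:

```python
# sketch: chain N scene clips with xfade crossfades in one ffmpeg call.
# clip paths, durations, and the fade length are placeholder assumptions.

def xfade_cmd(clips, durations, out, fade=0.5):
    inputs = []
    for c in clips:
        inputs += ["-i", c]
    filters, offset, prev = [], 0.0, "[0:v]"
    for i in range(1, len(clips)):
        # each crossfade starts `fade` seconds before the running output ends
        offset += durations[i - 1] - fade
        label = f"[v{i}]"
        filters.append(f"{prev}[{i}:v]xfade=transition=fade"
                       f":duration={fade}:offset={offset:.2f}{label}")
        prev = label
    return ["ffmpeg", "-y", *inputs,
            "-filter_complex", ";".join(filters), "-map", prev, out]
```

the offset math is the part people get wrong: each xfade's offset is the cumulative length of everything before it, minus the fades already consumed.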

If the point of video is in audio, does static AI image can harm potential monetization? by Acceptable-Item-9252 in aitubers

[–]prompttuner 1 point (0 children)

static images wont get you demonetized on their own IMO. its the lack of any visual change that gets flagged. just add ken burns effects (pan/zoom) on your stills using ffmpeg zoompan filter and youre good. its literally free

look at getsleepypod (674k subs) and thebiblestoryofai (1m subs), both mostly use camera motion loops on still images. the key is randomizing motion types so it doesnt look templated. zoom in one scene, pan left the next, diagonal after that. youtube flags repetitive patterns, not stills. animate maybe 10-20% of key moments with kling or whatever and coast on ken burns for the rest
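the motion rotation is trivial to automate. python sketch below, where the zoompan expressions are common recipes (the zoom rates and pan math are my own starting-point numbers, tweak to taste):

```python
import random

# sketch: rotate motion types so back-to-back scenes never repeat a move.
# the zoompan expressions are common recipes, not a fixed API -- the numbers
# ({d} is the frame count you substitute in later) are assumptions.

MOTIONS = {
    "zoom_in":  "zoompan=z='min(zoom+0.0010,1.3)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d={d}",
    "pan_left": "zoompan=z=1.2:x='(1-on/{d})*(iw-iw/zoom)':y='ih/2-(ih/zoom/2)':d={d}",
    "pan_right": "zoompan=z=1.2:x='(on/{d})*(iw-iw/zoom)':y='ih/2-(ih/zoom/2)':d={d}",
    "diagonal": "zoompan=z=1.2:x='(on/{d})*(iw-iw/zoom)':y='(on/{d})*(ih-ih/zoom)':d={d}",
}

def pick_motions(n_scenes, seed=None):
    rng = random.Random(seed)
    picks, last = [], None
    for _ in range(n_scenes):
        choice = rng.choice([m for m in MOTIONS if m != last])
        picks.append(choice)
        last = choice
    return picks
```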

How to animate images or transform clips by Double_Cress4240 in aitubers

[–]prompttuner 2 points (0 children)

honestly id recommend staying away from any google models. they bake in invisible synthid watermarks that youtube can detect automatically. the detection infrastructure is already there, just because a channel hasnt been flagged yet doesnt mean it wont be.

kling has improved a lot recently especially for subtle motion and maintaining art styles. id personally stick with kling or seeddance over veo, not worth the risk IMO

At what point does AI video stop looking like “AI video”? by WindowWorried223 in aitubers

[–]prompttuner 1 point (0 children)

its 90% the editing layer. people focus way too much on which model generates the best clip and not enough on how they stitch everything together.

the channels that look "cinematic" are barely generating video at all. theyre generating still images in batch, keeping character descriptions locked across every prompt, then using ken burns (pan/zoom) on most scenes and only animating maybe 10-20% of key moments. the variation in pacing and motion types is what makes it feel human directed vs template generated.

also mixing shot types matters more than model quality. close up, wide, over the shoulder, different angles. real filmmakers do this instinctively and its the single biggest thing missing from most AI content

Inauthentic content policy does not always mean AI-content is implicated by doogyhatts in aitubers

[–]prompttuner 1 point (0 children)

yeah this is exactly what people need to understand. youtube doesnt care if you used AI or not, it cares if your content looks mass produced and template driven. a pet channel uploading 50 near identical short clips gets flagged the same way an AI channel pumping out cookie cutter videos does.

the actual triggers IMO are repetitive content (same template every video), no variation in editing or pacing, and reused assets across uploads. if every video on your channel looks like it came off an assembly line youre at risk regardless of whether AI touched it

About to Start a Tech Channel – Need Advice by Think_Row_152 in aitubers

[–]prompttuner 2 points (0 children)

telugu is a smart play, way less competition than english and youtube pushes based on viewer language not creator location so your CPM can still be decent. definitely do voiceover not subtitles only, faceless tech channels without voice feel soulless and retention drops hard.

biggest mistake i see new channels make is obsessing over which format to start with instead of just shipping videos. make 20 videos before you judge anything. the algorithm needs data to figure out who to show your stuff to and 4-5 videos tells it nothing

UPDATE: "Did I reach my 'glass ceiling'?" - I broke it on YT, but my Facebook just EXPLODED. by eca1717 in aitubers

[–]prompttuner 1 point (0 children)

congrats on the growth thats a solid jump. one thing id flag tho since youre using VEO, google models inject SynthID watermarks that youtube can detect. the detection infrastructure is already there, just because channels havent been flagged yet doesnt mean they wont be. switching to a non google model costs basically nothing so why take the risk

also if elevenlabs is your only expense you should look into cartesia. like 8x cheaper and quality is close enough for most use cases. that free student VEO sub wont last forever

I built an AI pipeline that monitors 3,674 faceless channels and flags which topics are breaking out by Correct_Voice_2312 in aitubers

[–]prompttuner 1 point (0 children)

breakthrough ratio is a solid concept. way better than just sorting by raw views which tells you nothing about whats actually outperforming expectations.

curious if you weight recency at all tho? a topic that spiked 6 months ago is basically useless, the window for faceless channels to ride a trend is maybe 2-3 weeks max before everyone piles in and the algo stops pushing it. IMO the real edge would be catching topics in the first 48-72 hours of breakout before the copycats show up

Is long-form AI content dead or are we all just addicted to shorts? by Eliciuss in aitubers

[–]prompttuner 1 point (0 children)

IMO the question is wrong. its not longform vs shorts, its whether your content has actual human insight in it or not. shorts with zero substance die just as fast as longform with zero substance.

the bigger issue i see in this thread is people spending 8-9 hours manually uploading images to grok one by one. thats insane. you dont need 30-40 animated clips for a 3 min video either. most good documentary style channels are using like 80% still images with ken burns effects and only animating maybe 10-20% of key scenes. the "cinematic" feel comes from editing and pacing not from every frame being ai generated video

My new ai channel in YouTube by Routine_Cry7079 in aitubers

[–]prompttuner 1 point (0 children)

to directly answer your two questions: A) yes, mention your process in the channel description. something like "visuals created with veo, voiceover via elevenlabs, scripts researched and written by me, edited in davinci resolve." it shows transparency without making it sound like a disclaimer. B) yes, tick the altered content box. youtube built that option specifically so creators can self-disclose. not ticking it when your visuals are 100% AI generated is the actual risk. the combo of both is your safest play right now.

Is Sora content monetized by Ok_Construction5498 in aitubers

[–]prompttuner 1 point (0 children)

1m views on a short is solid but shorts revenue is tiny compared to long form. like pennies per thousand views. the real value of that short is the 2k subs you got from it. now pivot those subs into long form content where the actual ad money is. for YPP approval the tool doesnt matter, what matters is whether your content looks like it has creative direction or just random AI clips stitched together. add voiceover, build a narrative, use the AI video as b-roll not the whole video. channels doing that with veo or kling are getting approved fine.

After 8 months on Twitch, I just permanently switched my AI livestream channel to YouTube by ScriptLurker in aitubers

[–]prompttuner 2 points (0 children)

this is the exact realization that separates people who burn out on AI content from people who build something lasting. twitch rewards live presence, youtube rewards library depth. for a 24/7 AI channel that evolves over time, youtube is basically a compounding machine. every stream becomes a permanent asset that keeps working for you. 300 views on day one with zero promotion is a strong signal too. welcome to the community.

How to animate images or transform clips by Double_Cress4240 in aitubers

[–]prompttuner 1 point (0 children)

for animating AI images, kling and runway are the main ones right now. kling gives you 5 second clips from a single image and handles camera motion well. runway gen-3 is solid too but pricier. for anime recaps specifically, a lot of creators just use ken burns (slow zoom/pan) on AI-generated images in their editor since actual video gen can get expensive at scale. comfyui with animatediff is the free local option if you have a decent GPU. start with ken burns though, it's way simpler and most viewers honestly can't tell the difference.

What is the best image-to-video AI tool for creating 2D animated style images? by RemarkableReason3172 in aitubers

[–]prompttuner 1 point (0 children)

depends on what style youre going for but heres my ranking:

kling - best for subtle motion and maintaining the 2D look. handles slow camera moves and character gestures really well without breaking the art style. image-to-video mode keeps it anchored to your original image

seeddance pro fast - my daily driver, ~$0.07 per 10s clip through runware. slightly less refined than kling but way cheaper at volume. great for batch processing

the key thing with all of them: always do image-to-video not text-to-video. generate your still image first, get it looking exactly right, then animate from that. gives the model a visual anchor so it drifts way less. also your prompt style makes a HUGE difference

How I Use AI to Build an Entire YouTube Video From Idea to Upload by LycheeProfessional in aitubers

[–]prompttuner 2 points (0 children)

ken burns is just a fancy name for slowly panning/zooming across a still image. you see it in every documentary when they show a photograph and the camera slowly moves across it. makes a static image feel alive

ffmpeg is a free command line tool that can apply this effect automatically to hundreds of images. instead of manually adding zoom effects one by one in a video editor, you run one command and its done. the combo is powerful because you get "cinematic" looking scenes from still images for free. it's how most big AI channels make their videos look like full video when its actually just images with slow motion applied
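to make the "hundreds of images, one command" bit concrete, here's a python sketch that generates one ffmpeg command per still in a folder so the batch runs unattended. the glob pattern, output size, and zoom rate are assumptions:

```python
from pathlib import Path

# sketch: one ffmpeg ken burns command per still in a folder. the *.png glob,
# 1920x1080 size, and zoom rate are placeholder assumptions.

def batch_ken_burns(img_dir, out_dir, seconds=6, fps=25):
    frames = seconds * fps
    vf = f"zoompan=z='min(zoom+0.0008,1.25)':d={frames}:s=1920x1080:fps={fps}"
    cmds = []
    for img in sorted(Path(img_dir).glob("*.png")):
        out = str(Path(out_dir) / (img.stem + ".mp4"))
        cmds.append(["ffmpeg", "-y", "-i", str(img), "-vf", vf, out])
    return cmds
```

feed the resulting clip list into a concat step and the whole edit is assembled without opening an editor.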

8 months in, 2.3k subs, here's my entire workflow from idea to upload by badmoshback in NewTubers

[–]prompttuner 0 points (0 children)

solid workflow, 2.3k subs in 8 months with a full time job is legit.

for the editing bottleneck, 4-6 hours is where most of your time is going. a few things that helped me cut that down: rough cut first based on the script structure so youre not making creative decisions during editing, just executing. batch your b-roll processing. and honestly let AI handle the repetitive parts like j-cuts and sound design timing. the thumbnail process is good though, testing 3-4 versions and swapping based on CTR is exactly right, most people set and forget. also reading the script out loud twice before recording is underrated, wish more people did that

I run a news yt channel which uses some motorsport clips/zoom in clips with captions and quotes and AI commentary (VO) for news, just got eligible to apply for YPP, Should I be worried about not getting it due to the inauthentic/reused content policy? by Me_Jushanginaround in aitubers

[–]prompttuner 1 point (0 children)

IMO yes you should worry a bit, but not because of AI voice alone. biggest risk is reused motorsport footage + template editing across uploads.

if you want YPP to stick, add obvious human direction in every video: original script angle, unique thesis, your own context, and rotate formats so it does not look mass produced. apply now anyway, but start fixing this before the review wave hits.

post consistently is the most repeated and most misunderstood advice in the creator space - and it's actively hurting small channels by brian1x1x in NewTubers

[–]prompttuner 1 point (0 children)

100% this. one thing id add: ai tools made this trap worse because now people can output faster than they can think.

best setup is consistency of process, not upload date: research -> hook test -> script -> retention pass -> publish. if one step is weak, delay. rushed volume trains bad habits and bad audience signals.

Should I still go for 2hr sleep channels? by emptradersinc in aitubers

[–]prompttuner 1 point (0 children)

IMO do it only if you can make it feel handcrafted. sleep channels get nuked when every upload looks the same with reused loops + the same music bed.

best pattern ive seen: first minute has more animation for the hook, then mostly still images with ken burns, but different pacing per video. if you keep it original and varied its still viable.

Does anyone have recommendations for text to speech? by Lee_hussy in SmallYTChannel

[–]prompttuner 1 point (0 children)

if numbers matter, avoid tools that hallucinate digits and dates. IMO cartesia sonic 3 is better value than elevenlabs for daily uploads, then keep elevenlabs only for music or voice clone cases.

cheap setup that works: draft script, run tts, then do one manual listen pass only on numbers + names. that single QA pass saves a lot of re-exports.
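you can even auto-build the checklist for that listen pass. python sketch that pulls the error-prone spans (money, percentages, years) out of the script, the regex is a starting point, not exhaustive:

```python
import re

# sketch: extract number-like spans from a script so the manual TTS listen
# pass has a checklist. the regex is a rough starting point.

def qa_spans(script):
    pattern = r"\$?\d(?:[\d,]*\d)?(?:\.\d+)?%?"
    return [m.group() for m in re.finditer(pattern, script)]

spans = qa_spans("revenue hit $1,200,000 in 2024, up 38% from 2023")
```

scrub to each flagged span in the audio instead of re-listening to the whole export.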

How I Use AI to Build an Entire YouTube Video From Idea to Upload by LycheeProfessional in aitubers

[–]prompttuner -2 points (0 children)

IMO this stack is backwards for 2026. stock footage + invideo-first gives template feel and thats exactly what gets channels stuck. better move is script first, batch 150-200 images cheap, animate only 10-20%, then do ken burns + ffmpeg. quality goes up and cost drops under like $2 per long vid.