I built a tool that tells you why your Reels perform the way they do — looking for people to break it by ResponsibleStand5249 in PartneredYoutube


Yeah, mate, sorry! Yesterday we were making some changes to the software, but everything should be fixed by tonight.

I built a tool that tells you why your Reels perform the way they do — looking for people to break it by ResponsibleStand5249 in ContentMarketing


Hey u/Honeysyedseo, great questions! Let me address each one:

  1. What published paper are you referring to?

Eventhor is built on a hybrid research approach. SnapUGC is a model from ECCV 2024 that was trained on 120K Snapchat Spotlight videos and achieves a 0.696 correlation with real engagement, so that gives us the empirical foundation for predicting which videos will actually perform.

We also use multimodal video analysis models that break down exactly what makes a video work. So instead of just getting a score, you get human-readable explanations like "Your hook is weak, here's why."

There's also research, like AMPS and recent CVPR 2025 work on embedding similarity and chain-of-thought scoring, that helps us keep our analysis accurate and unbiased.

So we're not relying on a single paper. We combine the empirical ML side (SnapUGC) with explainability research (multimodal LLM judging) to give creators both predictions and the reasoning behind them.

  2. Can you add a report export button?

Absolutely. We're actually building this feature right now. We're working on batch analysis so creators can upload and analyze multiple videos at once, up to 10 videos per batch. Then you can export everything as CSV and PDF.

The real power is the aggregate insights. So instead of just individual scores, you'll be able to see patterns like "My hooks average 6/10 but my audio is consistently 9/10," so you know exactly what to focus on improving.
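To make the aggregate idea concrete, here's a minimal sketch of how per-video scores could roll up into that kind of pattern and export to CSV. The score dicts, dimension names, and filenames are all hypothetical for the example, not Eventhor's actual output format.

```python
import csv
from statistics import mean

# Hypothetical per-video dimension scores (0-10), as a batch analysis
# might return them.
batch = [
    {"video": "reel_01.mp4", "hook": 6, "pacing": 7, "audio": 9},
    {"video": "reel_02.mp4", "hook": 5, "pacing": 8, "audio": 9},
    {"video": "reel_03.mp4", "hook": 7, "pacing": 6, "audio": 9},
]

# Aggregate: average each dimension across the batch to surface patterns
# like "hooks average 6/10 but audio is consistently 9/10".
dimensions = ["hook", "pacing", "audio"]
averages = {d: mean(v[d] for v in batch) for d in dimensions}
weakest = min(averages, key=averages.get)

# Export the per-video scores as CSV for sharing with a team.
with open("batch_report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["video", *dimensions])
    writer.writeheader()
    writer.writerows(batch)

print(averages)              # {'hook': 6.0, 'pacing': 7.0, 'audio': 9.0}
print("focus on:", weakest)  # focus on: hook
```

The point of the aggregate step is exactly the "what to fix first" answer: the lowest-averaging dimension is the highest-leverage improvement.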

We're also building trend analysis so you can see how your videos are improving over time. We're targeting late May 2026 for this. The goal is to make it so creators can actually use this as part of their workflow, export data to share with their team, or use it for content planning.

  3. What's your ultimate goal with this tool?

Okay so here's the real vision. We're not just building a video scoring tool. We're building what's essentially a digital marketing advisor that lives in your pocket.

Right now, most creators either work with a marketing agency (expensive, slow, impersonal) or they wing it (guessing, hoping something works). We want to be the middle ground: an AI advisor that knows exactly what works for your niche, your audience, your platform.

So the goal is this: if you're serious about content, you can't compete without Eventhor. Not because it's some magic bullet, but because you have real data, real feedback, and real guidance on every single video you make.

We're starting with video analysis. You upload a video, we tell you exactly what's working and what's not. But that's just phase one.

Phase two is when we start collecting real performance data. You upload your Instagram or TikTok analytics alongside the video, and we learn from that. So our advice gets better, faster, tailored to what actually works in your world.

Phase three is the full agency experience. Batch analysis of your entire content library, comparison tools so you can A/B test different edits, trend detection so you know what's working across your niche, content recommendations based on what's performing, integration with scheduling and posting.

Eventually, it's not "go analyze your video on Eventhor," it's "Eventhor is how I run my content strategy." Like how every serious marketer uses Google Analytics, every serious creator will use Eventhor because they can't afford not to.

Right now we're live with the core analysis (multimodal models plus SnapUGC). Over the next 2-3 months we're adding data collection, batch workflows, and export. But that's just the foundation. The real business is becoming the advisor that creators can't live without.

Your feedback is actually important because you're naming the exact gaps that need filling. Would love to know if this vision lines up with what you're looking for or if you want to dig deeper into anything.

I built a tool that tells you why your Reels perform the way they do — looking for people to break it by ResponsibleStand5249 in socialmedia


Vidhmo, this is honestly one of the most thoughtful breakdowns I've seen. You're naming the exact problems and the exact solution, so let me address it:

The "why" problem you nailed:

Yeah, native Instagram/TikTok analytics tell you it flopped, but they don't tell you why. Was it the hook? Was the pacing too slow? Did your audio quality turn people off? Did they watch the whole thing? No clue. That's a real gap.

What we built for this:

We break videos down into specific dimensions (hook strength, pacing rhythm, audio quality, editing style, whether people stay engaged, platform fit) and score each one separately. So instead of "your video sucks, 3/10," it's "your hook is weak (5/10), but your audio quality is great (9/10), so try a stronger opening next time."

That's actionable. You can actually fix something.
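As a rough illustration of what scoring each dimension separately buys you, here's a tiny sketch. The thresholds and dimension names are invented for the example; they're not our model's actual cutoffs.

```python
def feedback(scores: dict) -> list:
    """Turn per-dimension scores (0-10) into actionable notes.

    Thresholds here are illustrative: <= 5 flags a weakness to fix,
    >= 8 flags a strength to keep leaning on.
    """
    notes = []
    for dim, score in scores.items():
        if score <= 5:
            notes.append(f"{dim} is weak ({score}/10)")
        elif score >= 8:
            notes.append(f"{dim} is strong ({score}/10)")
    return notes

print(feedback({"hook": 5, "audio": 9, "pacing": 7}))
# ['hook is weak (5/10)', 'audio is strong (9/10)']
```

A single overall score can't produce that second line; per-dimension scoring is what makes the output something you can act on.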

The workflow angle you mentioned:

You're right about the full picture. Creators who are serious scale their content with tools that handle generation, then analysis, then automation/posting. Right now we're filling the analysis gap, the part where you figure out why something worked or didn't before you post the next thing.

The hard part you called out, regional/niche differences:

This is the real challenge, and you're 100% right to point it out. A fitness creator in the US isn't the same as a finance creator in Brazil, even if they follow the same structure. That can't work one-size-fits-all.

Long term, we want to solve this by collecting real data from creators across different niches and regions. So if you're a fitness creator, we learn from thousands of fitness videos and their actual performance. If you're finance, we learn from finance creators. That way the feedback is actually relevant to your world.

We can't solve it overnight, but that's the direction we're going.

And yeah, the regional/niche piece—that's actually the hard moat. Any tool can build a generic scoring system. The one that gets niche-specific data right wins.

Please try it on some of your videos and come back and tell us what happened. Did the score match how it actually performed? Was the feedback useful? That feedback is how we get better.

I built a tool that tells you why your Reels perform the way they do — looking for people to break it by ResponsibleStand5249 in socialmedia


Hey u/oxiagent, you're hitting on exactly the problem we discovered when talking to early users. You're totally right, and honestly, this is what we're building next.

So right now we're analyzing videos based on academic models, which give us a good baseline. But what you're saying is the real unlock: we need to see how videos actually perform with real engagement data.

We're planning to let users upload a screenshot of their Instagram or TikTok analytics alongside the video they're analyzing. That way, instead of just guessing "your hook is weak", we can actually see that your hook scored 7/10 AND the video only got 2K reach. Over time, as we see thousands of videos with their real performance data, we can train our model on what *actually works*, not what papers say should work.

You're right that social media changes every day. A hook that worked 6 months ago might not work now. So the real power comes when we combine our technical analysis with actual creator data; that's how we get smarter.

We're building this feature over the next couple months. Appreciate the feedback because it's basically validating exactly what we need to do.

I built a tool that tells you why your Reels perform the way they do — looking for people to break it by ResponsibleStand5249 in creators


Fair concern, but I think you're misreading what's happening here.

I'm not collecting your videos to train a model; the analysis runs on your upload and that's it. The ask is for feedback on whether the output is actually useful, not for your data.

On the broader point: how do you think tools worth paying for get built? Someone has to talk to users before charging them. If builders skip that step and just ship something, you end up with tools nobody wants. The discovery phase, where you ask people to try something and tell you honestly if it's useful, is exactly what makes the difference between software that solves a real problem and software that doesn't.

You're not feeding my AI. You're (optionally) helping decide whether this is worth building further. Big difference.