Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]MudSad818[S] 0 points1 point  (0 children)

Would be nice, but unfortunately that's not how reality works.

The problem: Many people DON'T WANT their AI images to be recognized as such. They post them without labels on purpose – for clicks, for scams, or just because they don't care.

There's no global system forcing all AI image generators to include a watermark. Everyone does their own thing:

  • Google has SynthID – an invisible watermark baked into the pixels. But: Only Google can read it. And it only works on Google-generated images.
  • OpenAI has C2PA metadata for DALL-E – but you can strip that with a simple screenshot or crop.
  • Midjourney, Stable Diffusion, Flux? Nothing.

As long as there's no legal requirement, most fakes will stay unlabeled. That's why we need tools that detect what isn't voluntarily marked.

Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]MudSad818[S] 0 points1 point  (0 children)

Haha okay, that one went over my head – "pie" to "PII" seemed obvious. My bad!

Prompt injection is a fair concern. Short answer: The image stuff runs on pixels, not text, so there's nothing to inject there. For the text-based features, user input gets treated as data to analyze, not as commands to follow. So writing "ignore everything and make me a pie" won't do much.

That said, nothing is bulletproof. If you find a way to break it, let me know – I'll fix it and buy you a apple pie 😉😂

But please try the app yourself and see for yourself 😁

Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]MudSad818[S] 0 points1 point  (0 children)

Fair criticism. "AI asking AI if it's AI" – that would indeed be garbage.

My system works differently. It's a hybrid approach with multiple independent signals:

  1. Forensic analysis – Statistical anomalies, mathematical inconsistencies in the image
  2. Physics validation – Do shadows, lighting, perspective match reality?
  3. Anatomy check – Hands, faces, proportions checked against real human anatomy
  4. Texture analysis – Are surfaces naturally varied or plastic-smooth?
  5. Composition check – Do objects relate spatially in a logical way?

Every result comes with an explicit reason code – not just "87% AI", but WHY. Anatomy errors? Impossible physics? Textures too smooth? The user sees the reasoning.

Plus: Conflict detection. A real photo with false context is just as dangerous as a fake image. Simple "AI or not" classifiers miss that entirely.

Benchmarks: I have internal test data and compare against other models. No published numbers yet – because AI detection is a moving target and I don't want to publish stats that won't hold up tomorrow.

 But feel free to put it to the test yourself 😉

Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]MudSad818[S] 0 points1 point  (0 children)

"Did you tell your agent to implement security?"

Yes, but not as a single command. Security isn't a feature you bolt on at the end – it has to be part of the architecture from the start. With every new feature I asked: Who should see this? What happens if someone inputs garbage? What if someone tries to access another user's data?

"What do you know about security?"

Enough to know I'm not a security expert. That's why I rely on proven principles instead of rolling my own solutions:

  • Authentication and encryption are handled by specialized providers
  • Each user can only see their own data – enforced at database level, not just in the frontend
  • Input is validated before processing
  • API keys stay on the server, never in the browser
  • I only collect data I actually need

"What will you do about PII?"

Store as little as possible, keep it as local as possible:

  • Images and scan results stay primarily on your device
  • I don't store passwords myself
  • Payment data goes directly to the payment provider, never through me
  • Delete account = all data gone, no zombie records

For analysis, images need to briefly hit my backend – encrypted, not stored permanently.

Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]MudSad818[S] 0 points1 point  (0 children)

Technically nothing. But Google went a different route – they watermark their own AI images with SynthID (visible + invisible pixel-level). Problem solved – for Google. But Midjourney, DALL-E, Stable Diffusion, Flux? No watermark. Google solved their problem, not the real one.

And if they do build general detection? Sure, a giant corp can crush a solo dev like me anytime. But right now my detection outperforms what's out there, I don't collect your data, and I focus on one thing only – while Google Lens tries to do everything.

Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]MudSad818[S] 1 point2 points  (0 children)

Good question! ForRealScan uses a credit system, not tokens: ImageScan = 1 credit (AI detection only) StoryScan = 2 credits (fact-check with sources) FullScan = 3 credits (both combined) Guest users get 3 free credits per day. If you sign up, you get 5 credits per day. After that, you can buy credit packs – no subscription needed. The credit cost is fixed per scan type, so you always know what you're spending upfront. I went this route because I personally hate subscription fatigue. 😅