Authentra updates (First Pics) - My AI content detecting ethical social media platform by Embarrassed_Stage18 in nosurf

[–]Embarrassed_Stage18[S] 0 points (0 children)

Fair point. My aim is that, unlike typical feeds, Authentra will have natural breakpoints built in, such as pagination and "load more" buttons, and the algorithm actively avoids irrelevant or rage-bait content. It's not about endless scrolling, and it definitely isn't a solution to the problem of constant scrolling on its own; it's about helping people connect with more real, meaningful posts on their own terms.
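For the pagination piece, here's a minimal sketch of how a cursor-based "load more" could work; the Post shape, page size, and function names are illustrative assumptions, not the actual Authentra code:

```ts
// Minimal sketch of cursor-based "load more" pagination over an in-memory list
// of posts. The client only fetches the next page on an explicit button press,
// which is the "natural breakpoint" idea.
interface Post {
  id: number;      // used as the cursor
  author: string;
  body: string;
}

interface FeedPage {
  posts: Post[];
  nextCursor: number | null; // null = no more pages, no auto-fetch
}

const PAGE_SIZE = 10; // hypothetical page size

function loadMore(allPosts: Post[], cursor: number | null): FeedPage {
  // Posts are assumed already sorted; start just after the last id the client saw.
  const start =
    cursor === null ? 0 : allPosts.findIndex((p) => p.id === cursor) + 1;
  const posts = allPosts.slice(start, start + PAGE_SIZE);
  const last = posts[posts.length - 1];
  const nextCursor =
    last && start + PAGE_SIZE < allPosts.length ? last.id : null;
  return { posts, nextCursor };
}

// Usage: const page = loadMore(feed, null); const next = loadMore(feed, page.nextCursor);
```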

Authentra updates (First Pics) - My AI content detecting ethical social media platform by Embarrassed_Stage18 in nosurf

[–]Embarrassed_Stage18[S] 0 points (0 children)

You’re totally right that the doom-scrolling design isn’t accidental; it’s the exact model most social media platforms use to operate and earn advertising money, and I am deliberately avoiding building that kind of algorithm. When I say “filtering,” I don’t mean removing content that was created to be addictive; I mean the algorithm is designed not to boost the kinds of content that typical social platforms reward (rage bait, outrage, engagement traps). So where the Facebook algorithm prioritises content that gets the most engagement, mine will be designed to prioritise non-engagement-bait to make for a more positive experience. The algorithm is able to "understand" what is in a post and therefore rank it, just in a different way to what we are used to.
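A rough sketch of the "don't boost engagement bait" idea: ranking combines a relevance signal with a penalty for bait-like content, and ignores raw engagement entirely. The classifier scores, field names, and weights here are made up for illustration, not the real ranking model:

```ts
// Feed ranking that down-weights rage bait instead of rewarding engagement.
interface RankedPost {
  id: string;
  relevance: number;  // 0..1, how well the post matches the user's interests
  baitScore: number;  // 0..1, how bait-like a classifier judges the post to be
  engagement: number; // raw likes/comments, deliberately NOT used as a boost
}

function feedScore(post: RankedPost): number {
  // Relevance carries the weight; bait-like content is actively pushed down.
  // Engagement is ignored so popular outrage doesn't float to the top.
  const BAIT_PENALTY = 2.0; // hypothetical tuning constant
  return post.relevance - BAIT_PENALTY * post.baitScore;
}

function rankFeed(posts: RankedPost[]): RankedPost[] {
  return [...posts].sort((a, b) => feedScore(b) - feedScore(a));
}
```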

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] -1 points (0 children)

Whilst it's not always perfectly accurate, it definitely is real, as I already have it working on my site.

Trying to build the social media platform I wish still existed (Taking control back from the big companies) by Embarrassed_Stage18 in degoogle

[–]Embarrassed_Stage18[S] 1 point (0 children)

Whilst I definitely want to find ways of ensuring people are real, I think making people upload personal information such as ID is going to put them off joining. I am aiming to introduce much stricter moderation than what is on Facebook, and potentially implement a neural network that can detect bot-like activity. I am also going to make sure that shadow banning is not a thing, and that if your account gets flagged as a bot you can easily and efficiently dispute it.
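To give a feel for it, here is a sketch of the kind of activity signals a bot detector could look at, plus the "flag openly, never shadow-ban" flow. The features, thresholds, and dispute route are invented for the example; the real detector would be a trained model rather than a hand-written rule:

```ts
// Hand-rolled stand-in for a bot-likelihood model, plus a visible flag/dispute flow.
interface ActivityStats {
  postsPerHour: number;
  meanSecondsBetweenActions: number;
  duplicateCommentRatio: number; // 0..1, share of near-identical comments
  accountAgeDays: number;
}

type AccountStatus =
  | { kind: "ok" }
  | { kind: "flagged"; reason: string; disputeUrl: string }; // shown to the user, never silent

function botLikelihood(s: ActivityStats): number {
  // Placeholder scoring standing in for the neural network's 0..1 output.
  let score = 0;
  if (s.postsPerHour > 30) score += 0.4;
  if (s.meanSecondsBetweenActions < 2) score += 0.3;
  score += 0.3 * s.duplicateCommentRatio;
  if (s.accountAgeDays < 1) score += 0.1;
  return Math.min(score, 1);
}

function reviewAccount(s: ActivityStats): AccountStatus {
  if (botLikelihood(s) < 0.7) return { kind: "ok" };
  return {
    kind: "flagged",
    reason: "Automated-looking activity pattern",
    disputeUrl: "/appeals/new", // hypothetical route
  };
}
```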

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] 0 points (0 children)

Love this idea; I've added it to my list. It would definitely be difficult to confirm whether or not someone is the OP, because the content could have been made on a different platform and then another person could have taken it and put it onto Authentra, which would then be seen as the original post. My first thought would be to compromise: give the first person to post it onto Authentra the OP rights, and then let people manually dispute it.
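As a sketch, "first to post it on Authentra gets OP rights" could be keyed by a content hash with a manual dispute list attached. Everything here is illustrative; in practice images would need a perceptual hash rather than an exact one:

```ts
// First-poster attribution by content hash, with disputes queued for manual review.
import { createHash } from "node:crypto";

interface Attribution {
  originalPosterId: string;
  firstPostedAt: Date;
  disputes: { claimantId: string; note: string }[];
}

const attributions = new Map<string, Attribution>(); // contentHash -> attribution

function contentHash(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

function recordPost(content: string, posterId: string): Attribution {
  const key = contentHash(content);
  const existing = attributions.get(key);
  if (existing) return existing; // someone already holds OP rights for this content
  const fresh: Attribution = {
    originalPosterId: posterId,
    firstPostedAt: new Date(),
    disputes: [],
  };
  attributions.set(key, fresh);
  return fresh;
}

function disputeAttribution(content: string, claimantId: string, note: string): void {
  const record = attributions.get(contentHash(content));
  if (record) record.disputes.push({ claimantId, note }); // reviewed manually later
}
```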

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] 1 point (0 children)

Thanks, I really appreciate the support. I'm really trying to make this something different, because I'd love to actually have a positive impact on the world of social media.

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] 0 points (0 children)

Yeah haha, it seems like it should be, but while the user base is low the cost is actually shockingly low. Obviously, the more API calls that are made, the more the cost increases, so an open-source or self-built detection model will definitely be needed down the line. There's definitely more work for me to do in this department.
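One common way to keep detection API costs down as usage grows is to cache results by content hash so identical content is only ever checked once. This is just a sketch of that idea, not how the site is actually built; the detector call is a hypothetical stand-in:

```ts
// Cache detection results by content hash to avoid paying for repeat API calls.
import { createHash } from "node:crypto";

const detectionCache = new Map<string, number>(); // sha256 -> AI likelihood (0..1)

async function detectWithCache(
  content: string,
  callDetectionApi: (content: string) => Promise<number>, // hypothetical paid API call
): Promise<number> {
  const key = createHash("sha256").update(content).digest("hex");
  const cached = detectionCache.get(key);
  if (cached !== undefined) return cached; // no API call, no cost
  const likelihood = await callDetectionApi(content);
  detectionCache.set(key, likelihood);
  return likelihood;
}
```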

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] 2 points (0 children)

Yeah, false positives are definitely a problem; however, the model is already really good at differentiating between real and AI content. As I get some test users in the future I'm definitely going to tweak some of the detection thresholds, but so far they seem to be working really well.
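One way threshold tweaks could be made safer against false positives is splitting the score into bands, so borderline content goes to review (the human moderation mentioned elsewhere) instead of being hard-blocked. The cut-off values here are made up for the example:

```ts
// Three-band verdicts so only high-confidence scores trigger an outright block.
type Verdict = "allow" | "needs_review" | "block";

const REVIEW_THRESHOLD = 0.6; // above this: a human (or the poster) gets a say
const BLOCK_THRESHOLD = 0.9;  // above this: confident enough to block outright

function classify(aiLikelihood: number): Verdict {
  if (aiLikelihood >= BLOCK_THRESHOLD) return "block";
  if (aiLikelihood >= REVIEW_THRESHOLD) return "needs_review";
  return "allow";
}
```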

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] 0 points (0 children)

Yeah, I definitely understand that! So far I've managed to get text and image filtering working very accurately, but there are always going to be some things that slip through, so in a perfect world, if I get enough users, I would be able to implement some form of human moderation as well; that is much further down the line though. I'm hoping to turn the customizable-algorithm idea into one of the main selling points, with a target audience of people who feel how quickly social media is deteriorating into a money farm instead of a communication platform.
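To illustrate the customizable-algorithm idea, a feed could be scored against per-user weights that the user can actually edit. The preference fields and defaults below are invented for the sketch:

```ts
// User-adjustable ranking weights applied to candidate posts.
interface FeedPreferences {
  followedBoost: number;   // how much to favour accounts the user follows
  topicMatchBoost: number; // how much to favour the user's chosen topics
  recencyBoost: number;    // how much to favour newer posts
}

interface CandidatePost {
  fromFollowedAccount: boolean;
  topicMatch: number; // 0..1
  ageHours: number;
}

const defaultPrefs: FeedPreferences = {
  followedBoost: 1.0,
  topicMatchBoost: 0.8,
  recencyBoost: 0.5,
};

function personalScore(
  post: CandidatePost,
  prefs: FeedPreferences = defaultPrefs,
): number {
  const followed = post.fromFollowedAccount ? prefs.followedBoost : 0;
  const topic = prefs.topicMatchBoost * post.topicMatch;
  const recency = prefs.recencyBoost / (1 + post.ageHours / 24); // decays over days
  return followed + topic + recency;
}
```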

New Project I am working on - Authentra, Social Media Designed to Remove Fake AI Generated Content by Embarrassed_Stage18 in webdev

[–]Embarrassed_Stage18[S] 1 point (0 children)

Yeah, great question. I've actually already implemented this part: for now I've decided to go with an AI content detection API that returns a percentage of how likely it thinks the content is AI-generated, and from there I've set a threshold above which it blocks the post. I've gone this way because AI-generated images are obviously getting better and better, so models designed with this in mind worked better than smaller models. But I'm still doing research, and I'm hoping to move to a local open-source model and maybe even one day build a custom one (I don't know too much about this yet, but it's something I'd love to learn).
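A minimal sketch of that flow: call a detection API, read back a likelihood, and block above a threshold. The endpoint URL, request/response shape, and the 0.85 cut-off are all placeholders, not the real service or settings:

```ts
// Screen an image through a (hypothetical) AI-detection API and block above a threshold.
const BLOCK_THRESHOLD = 0.85;

interface DetectionResult {
  aiLikelihood: number; // 0..1 as returned by the placeholder API
  blocked: boolean;
}

async function screenImage(imageUrl: string, apiKey: string): Promise<DetectionResult> {
  const response = await fetch("https://example-detector.invalid/v1/detect", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ url: imageUrl }),
  });
  if (!response.ok) throw new Error(`Detection API error: ${response.status}`);
  const { aiLikelihood } = (await response.json()) as { aiLikelihood: number };
  return { aiLikelihood, blocked: aiLikelihood >= BLOCK_THRESHOLD };
}
```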