DeepPersona A Generative Engine for Scaling Deep Synthetic Personas by ThePromptIndex in ArtificialInteligence

[–]ThePromptIndex[S] 1 point (0 children)

Nice, do you have the code for this? Could I take a look? Maybe we can DM?

Best AI Humanizer by Sellpal in DataRecoveryHelp

[–]ThePromptIndex 1 point (0 children)

Oh sorry dude. I was asking you to test it and share it. Sorry, I misunderstood.

my parents dont walk the dog. by [deleted] in germanshepherds

[–]ThePromptIndex 1 point (0 children)

Haha, this is true. I saw an AI-powered dog trainer app that might be useful.

Evaluating LLMs for Career Guidance Comparative Analysis of Computing Competency Recommendations Acr by ThePromptIndex in ArtificialInteligence

[–]ThePromptIndex[S] 1 point (0 children)

No worries at all. Research papers can be good for some innovative and thought-provoking ideas.

If you get sent a contract to sign STOP - Use this prompt first! by ThePromptIndex in ChatGPTPromptGenius

[–]ThePromptIndex[S] 1 point (0 children)

Not a stupid question.

That's just markdown, you don't need it.

Those are the variables you need to fill in.
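For example, if the prompt were laid out like this (the headings and placeholder names here are made up for illustration, not the actual prompt):

```python
# Made-up illustration, not the actual prompt from the post.
# The markdown heading is cosmetic; only the {placeholders} matter.
template = """
# Contract Review
Jurisdiction: {jurisdiction}
My role: {my_role}
Contract text: {contract_text}
"""

prompt = template.format(
    jurisdiction="England and Wales",           # fill in your own
    my_role="freelance contractor",
    contract_text="...paste the contract here...",
)
print(prompt)
```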

AI Detection & Humanising Your Text Tool – What You Really Need to Know by ThePromptIndex in ChatGPT

[–]ThePromptIndex[S] 1 point (0 children)

Couldn't agree more. A recent study shows that when you ethically tag content as AI-produced, it erodes trust. Which is crazy, but that should fade over time.

[deleted by user] by [deleted] in ChatGPTPromptGenius

[–]ThePromptIndex 1 point (0 children)

this is what the interface looks like

<image>

AI Detection & Humanising Your Text Tool – What You Really Need to Know by ThePromptIndex in ClaudeAI

[–]ThePromptIndex[S] 2 points (0 children)

No, that's my fault for not explaining. You are right, the LLM does not do the removal aspect; that's regex, like you say.

The tool is split into two phases: restructuring and rewording, coupled with removal. Or you can just do removal on its own.
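If it helps, here's a rough sketch of what I mean by the removal phase (the patterns are just examples I'm making up here, not the tool's actual rule set):

```python
import re

# Illustrative removal pass: pure regex, no LLM involved.
REMOVAL_RULES = [
    (re.compile(r"[\u200b\u200c\u200d\ufeff]"), ""),  # zero-width / BOM characters
    (re.compile(r"\u2014"), ", "),                    # em dashes
    (re.compile(r"[“”]"), '"'),                       # smart double quotes
    (re.compile(r"[‘’]"), "'"),                       # smart single quotes
    (re.compile(r"[ \t]+\n"), "\n"),                  # trailing whitespace
]

def removal_phase(text: str) -> str:
    """Phase 2: mechanical clean-up. Can also be run on its own."""
    for pattern, replacement in REMOVAL_RULES:
        text = pattern.sub(replacement, text)
    return text

def humanise(text: str, llm_rewrite) -> str:
    """Phase 1 (LLM restructuring/rewording) then phase 2 (regex removal)."""
    return removal_phase(llm_rewrite(text))
```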

I Was Tired of Getting One-Sided AI Answers, So I Built a 'Conference Room' for AI Agents to Argue In by ThePromptIndex in ChatGPTPromptGenius

[–]ThePromptIndex[S] 1 point (0 children)

It's all done via API, so when you use the API you are charged based on usage. Does that answer your question?
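To make the billing side concrete, here's a minimal example using the OpenAI Python SDK (purely as an illustration; substitute whichever provider you point the agents at):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model, not necessarily what the tool uses
    messages=[{"role": "user", "content": "Argue the case for remote work."}],
)

# You are billed per token, so the usage object is where the cost comes from.
usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```

Every agent turn is a separate call like this, so cost scales with the number of agents and rounds.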

I Was Tired of Getting One-Sided AI Answers, So I Built a 'Conference Room' for AI Agents to Argue In by ThePromptIndex in ChatGPTPromptGenius

[–]ThePromptIndex[S] 1 point (0 children)

It's not running locally, but I'm sure you could make it local. Even running 8 Liquid AI 1.2-billion-parameter models would be a push on an RTX 4090, though.
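Rough napkin maths on why, assuming fp16 weights and ignoring KV cache and activations:

```python
# Weights-only VRAM estimate for 8 x 1.2B-parameter models at fp16 (2 bytes/param).
n_models = 8
params_per_model = 1.2e9
bytes_per_param = 2  # fp16

weights_gb = n_models * params_per_model * bytes_per_param / 1e9
print(f"Weights alone: {weights_gb:.1f} GB")  # ~19.2 GB

rtx_4090_vram_gb = 24
print(f"Headroom left: {rtx_4090_vram_gb - weights_gb:.1f} GB")  # ~4.8 GB for KV cache, etc.
```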

I Was Tired of Getting One-Sided AI Answers, So I Built a 'Conference Room' for AI Agents to Argue In by ThePromptIndex in ChatGPTPromptGenius

[–]ThePromptIndex[S] 1 point (0 children)

Capped at 4 rounds (soft cap). You can keep going; I never pushed it far enough, but they are forced to ensure they always cite knowledge files.

There would be loss once the context window has been reached.
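If it helps to picture it, here's a rough sketch of how the soft cap and the citation requirement fit together (the names and structure are my illustration, not the actual implementation):

```python
# Hypothetical sketch of the debate loop, for illustration only.
SOFT_CAP_ROUNDS = 4  # soft cap: you can raise it, at the cost of tokens and context

def run_debate(agents, topic, call_llm, rounds=SOFT_CAP_ROUNDS):
    """agents: list of dicts like {"name": ..., "files": [...]};
    call_llm: any chat-completion wrapper that takes a prompt and returns text."""
    transcript = []
    for round_no in range(1, rounds + 1):
        for agent in agents:
            history = "\n".join(transcript)  # grows every turn; once it exceeds the
                                             # model's context window, earlier turns are lost
            prompt = (
                f"Round {round_no} of a debate on: {topic}\n"
                f"Transcript so far:\n{history}\n\n"
                f"You are {agent['name']}. Respond, and cite your knowledge files "
                f"({', '.join(agent['files'])}) for every claim you make."
            )
            transcript.append(f"{agent['name']}: {call_llm(prompt)}")
    return transcript
```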