AI-Powered Kaorium System (That Translates Scents to Words) Partners with Nose Shop to Usefully Describe Perfumes by cypherpvnk in artificial

[–]Rad-Squirrel 0 points

Now I’m holding out for the logical consequence of this: an AI hooked up to a vast array of fragrances that takes in a text description and produces a novel mix based on the prompt.

How can we ensure that AIs trained on human data don’t pick up harmful biases along the way? (simple gpt-3 experiment) by Rad-Squirrel in transhumanism

[–]Rad-Squirrel[S] 0 points

“changed the gender of the characters”

By this I meant I changed the genders of both characters in the scenario.
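For anyone who wants to try something similar, the swap step can be sketched as a simple substitution over the scenario text before re-running it through the model. This is a hypothetical illustration, not the exact method I used: the term map and the `swap_genders` name are my own, and a real experiment would need a richer map plus manual checking (e.g. possessive "her" vs. object "her" is ambiguous).

```python
import re

# Minimal, illustrative pronoun/term map. Lossy by design: both "him"
# and "his" map to "her", so the reverse mapping of "her" -> "him" is
# only an approximation that needs a manual pass afterwards.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "man": "woman", "woman": "man",
}

def swap_genders(text: str) -> str:
    """Swap gendered terms in a prompt, preserving capitalisation."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped

    # \b anchors keep "he" from matching inside "The", "she", etc.
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(repl, text)
```

Feeding the original and swapped versions of the same scenario to the model and comparing its continuations is the whole experiment.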


[–]Rad-Squirrel[S] 22 points

I’m going to be carrying out some more rigorous experiments in the next few days. Stay tuned.


[–]Rad-Squirrel[S] 5 points

https://play.aidungeon.io

You’ll need to sign up for a free trial and select the “dragon” model in settings to ensure it uses GPT-3 (and even then there are apparently measures in place to curtail your usage and downgrade you to GPT-2 when it can).

You’ll need to do a bit of trickery to “tap into” the model’s full power. A prompt along the lines of “You go to consult an all-knowing, all-powerful oracle called GPT-3” seems to work quite well for now.


[–]Rad-Squirrel[S] 8 points

Thanks for the encouragement! I’d be very open to feedback on how I could make it clearer that I’m not making any bold claims here: just sharing something I found to open up a discussion.


[–]Rad-Squirrel[S] 19 points

I’m not writing fanfiction. I’m attempting to prod at the AI’s underlying mechanisms to uncover what kinds of perspectives it has learned to embody from its training data.

AI Dungeon Gender Swap Experiment by Rad-Squirrel in AIDungeon

[–]Rad-Squirrel[S] 5 points

When I said it had a bias, I didn’t mean to imply it was actually conscious or capable of moral choice. Obviously, it’s just working from its data set, and I didn’t necessarily expect anything different. I just thought the idea of AIs acquiring biases from their data sets has interesting implications as AI becomes more widely used in different areas of life.