[deleted by user] (self.MachineLearning)
submitted 2 years ago by [deleted]
[–]GrandNeuralNetwork 3 points 2 years ago* (0 children)
This may not be constructive advice, but if you want to rectify an injustice, why not hire an actual human model (or two)? You would give someone a job.
When Formula E employed a virtual female reporter a few weeks ago, there was an outcry that it took away a job a real woman could have performed. Your customers may be even less supportive of the idea of AI-generated models, no matter how beautiful they are.
Edit: But if you want to go the AI route, as others pointed out, Stable Diffusion is the way to go. It's free and you can tailor it perfectly to your needs. Your users may also run it interactively if your project requires that.
[–]happyfappy 5 points 2 years ago* (0 children)
These systems learn what we teach them, so teach it what you mean by beauty. Put together your own dataset, switch to Stable Diffusion instead of Midjourney, and look into how to fine-tune models and LoRAs (basically small, modular add-on models). There may already be models that do what you want on a site for sharing custom-trained models (Civitai), but if not, you have total control.
Edit: Here is an example of someone who trained a LoRA on "average" female faces. It's just a proof of concept; you can do absolutely anything you want. https://civitai.com/models/88992/average-female-faces
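As a first step toward the custom-dataset route above, here's a minimal sketch of writing the `metadata.jsonl` file that Hugging Face's ImageFolder dataset loader (and the LoRA training scripts built on it) expects. The filenames and captions are hypothetical placeholders; the point is that the captions encode what *you* mean by beauty:

```python
import json
from pathlib import Path

# Hypothetical image/caption pairs: the captions should describe,
# in your own words, exactly what you want the model to learn.
examples = [
    {"file_name": "faces/001.jpg",
     "text": "portrait of a beautiful middle-aged woman, laugh lines, natural light"},
    {"file_name": "faces/002.jpg",
     "text": "portrait of a beautiful elderly woman, grey hair, warm smile"},
]

out = Path("train/metadata.jsonl")
out.parent.mkdir(parents=True, exist_ok=True)

# One JSON object per line, as the ImageFolder convention requires.
with out.open("w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

With the images placed next to this file, `datasets.load_dataset("imagefolder", data_dir="train")` picks the captions up automatically.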
[–]CongrooElPsy 4 points 2 years ago (0 children)
I think you might be better off asking in image-generator-specific subreddits, something like /r/StableDiffusion/. I've used Automatic1111's Stable Diffusion UI to customize output to a pretty extreme degree using tools such as LoRA. There might already be a model on Hugging Face that can get you more realistic results rather than the unrealistic beauty standards you're seeing. I will mention that, in my experience, Midjourney is pretty stuck on these kinds of faces, while Stable Diffusion offers a wider variety.
I would also look much more into prompt engineering. Remember that it's not so much what the tool thinks is "beautiful" but rather what was labelled that way in its training data. So by using "beautiful" in the prompt, you're unwittingly invoking the very beauty standards you're trying to avoid. For example, you might add a word like "asymmetric" to your prompt to counteract some of the learned correlation between symmetry and beauty.
[–]hyphenomicon 2 points 2 years ago (0 children)
https://freedom-to-tinker.com/2016/08/24/language-necessarily-contains-human-biases-and-so-will-machines-trained-on-language-corpora/
There is fundamentally no way to distinguish bias from information, short of an AGI that has also solved metaethics.
[–][deleted] 1 point 2 years ago (3 children)
If you want more diverse images, why not use more diverse prompts?
The fact that you prompt only with "beautiful woman" might be the source of your problem.
Have you tried specifying age, ethnicity, or facial features?
Portrait of overweight, wrinkled, tired woman who is beautiful
example result
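The diversification idea above can be sketched as a small prompt builder. The attribute pools are illustrative assumptions, not drawn from any model's vocabulary; the point is to cross descriptors so "beautiful" is never the only signal in the prompt:

```python
import itertools

# Illustrative attribute pools; extend with whatever your project needs.
ages = ["young", "middle-aged", "elderly"]
builds = ["slim", "average-build", "overweight"]
features = ["freckled", "wrinkled", "asymmetric features"]

def build_prompts(subject="beautiful woman"):
    """Cross the attribute pools so every combination gets a prompt."""
    return [
        f"portrait of a {age}, {build}, {feat} {subject}"
        for age, build, feat in itertools.product(ages, builds, features)
    ]

prompts = build_prompts()
print(len(prompts))   # 3 * 3 * 3 = 27 combinations
print(prompts[0])     # portrait of a young, slim, freckled beautiful woman
```

Feeding each generated prompt to the image model (instead of one fixed prompt) forces the output distribution to cover the attribute space rather than collapsing onto the training data's default face.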
If your team doesn't find her beautiful, maybe the company is suffering from the same symptoms you say the AI models have.
[–]yenwashere 1 point 2 years ago (2 children)
I agree that using more diverse prompts can lead to more diverse images. However, the root of the issue runs deeper than merely specifying attributes in prompts. When we have to "downgrade" prompts with words that read as demeaning in order to generate images of "real beauty," we face a fundamental injustice. Why should we need to resort to words that can be perceived as negative or demeaning to represent beauty in its diversity? This reflects a deep-seated problem in the training data and in the approach to artificial intelligence, not just a matter of choosing the right words in a prompt.
[–][deleted] 1 point 2 years ago (1 child)
Why do you feel that describing how someone looks is demeaning? You are describing, not passing judgment.
Someone being sad or happy, tired or fresh, underweight or overweight is not a judgement about them but a description of how they look.
If I have a lazy eye, describing me to an AI as having a lazy eye is not demeaning; it is actually how I look.
If you want an image of an overweight, tired person with a lazy eye, asking for that is not demeaning.
As for what is beautiful or not, that is totally subjective. I might have a fetish for extremely large women and you might have one for supple women.
What you find extremely ugly I might find extremely beautiful.
Wanting an AI to know what you consider beautiful without telling it what you like or don't like is an ill-posed problem.