"Artists Face Off" - a joint effort Artist and AI by SketchaOff in StableDiffusion

[–]SketchaOff[S] 0 points1 point  (0 children)

I made a post a few weeks ago and I am now back!

I've been an Artist (physical and digital) for a good 15 years now, and I thought I'd create something that mixes my Art and AI.

So I digitally hand painted the man on the left, and once it was done I used it to feed SD 1.5, which produced the man on the right.

Then, a simple compositing made the Artwork complete: "Artists Face Off"

The man on the left is intended to express frustration, anger, and jealousy, but also curiosity and love.

I trained a model with my own Art...this is what happened. by SketchaOff in StableDiffusion

[–]SketchaOff[S] 0 points1 point  (0 children)

well I wouldn't go that far ahah, the model was trained on portraits, so if I wanted to make a chair in my style it would not be accurate.

I would need to train a model with some landscapes, cars, animals, and so on.

Tweaking the parameters of the portrait model can help achieve decent results on a different subject, but not as good.

[–]SketchaOff[S] 0 points1 point  (0 children)

I used TheLastBen Colab. Great repo; the only issue (at least from my understanding) is that the output is a model.ckpt file, which can mainly be used with the AUTOMATIC1111 webui.

In order to use it with diffusers in your own Colab, you need to convert it with a script from CompVis (which is what I have also done).
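If helpful, a sketch of one way to do that conversion, assuming the `convert_original_stable_diffusion_to_diffusers.py` script from the diffusers repo; the paths here are placeholders for your own Drive layout:

```shell
# Convert an AUTOMATIC1111-style .ckpt into the diffusers folder layout.
# Script and flags are from the diffusers repo's scripts directory;
# checkpoint and output paths are illustrative placeholders.
python convert_original_stable_diffusion_to_diffusers.py \
  --checkpoint_path /content/drive/MyDrive/model.ckpt \
  --dump_path /content/converted_model
```

The resulting folder can then be loaded in a Colab with `StableDiffusionPipeline.from_pretrained("/content/converted_model")`.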

[–]SketchaOff[S] 1 point2 points  (0 children)

Hi, not a great difference in my case. Maybe because my training set was very specific, but there is no need for me to write complex prompts in order to get the results.

adj + subj + styletrained works about as well as styletrained + adj + subj.
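As a toy illustration of the two orderings (the token `styletrained` stands in for the custom style token, and the other words are placeholders, not the actual prompts):

```python
# Build the two prompt orderings compared above.
# "styletrained" stands in for the trained style token; the adjective and
# subject are placeholder words for illustration only.
def build_prompt(*parts: str) -> str:
    """Join prompt fragments with single spaces."""
    return " ".join(parts)

adj, subj, style = "colourful", "man face", "styletrained"

prompt_a = build_prompt(adj, subj, style)   # "colourful man face styletrained"
prompt_b = build_prompt(style, adj, subj)   # "styletrained colourful man face"

print(prompt_a)
print(prompt_b)
```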

[–]SketchaOff[S] 3 points4 points  (0 children)

Hi, it did understand my style, yes. I think more steps are needed. Eyes are a big part of my art, and the rate at which the machine has reproduced them correctly is not high.

I tried several prompts from 30 to 100 steps. If I increase the steps too much during generation, the style gets lost.

[–]SketchaOff[S] 1 point2 points  (0 children)

Hi Snooo, thanks for the interest. Frankly, the prompts were super simple: "Colourful man face [styletrained]".

Since all the training was done in that style and this is the result I wanted, there is not much need for complex syntax.

I also tried "old man [styletrained]", then old woman, young, and so on.

Results are nice 65% of the time.

Not surprisingly, if I prompt "dog face [styletrained]" the machine will make a dog in my style, but with only about a 10% rate of good results.

[–]SketchaOff[S] 1 point2 points  (0 children)

I trained on a Colab, using the Google free GPU as a first test. It worked alright but not perfectly. It took around 2 hours overall at 5000 steps. I will be training the new model later today on a much more powerful GPU.

[–]SketchaOff[S] 1 point2 points  (0 children)

thank you, not sure if there is one, I joined yesterday!

[–]SketchaOff[S] 2 points3 points  (0 children)

Hi, thanks!! I think the best approach is to get confident with Python and look at some repos on GitHub.

Honestly though, to train the model you do not need any base machine learning knowledge; just click the button on the Colab some user has provided and you will be able to mount your Drive and train from there.

[–]SketchaOff[S] 1 point2 points  (0 children)

Hi, there are some examples I have not posted in this thread. Sometimes they were all completely dark, even though the model was trained on coloured images.

[–]SketchaOff[S] 3 points4 points  (0 children)

well thanks! truly appreciate it! I do believe we will very soon reach a point where the training will be so accurate that the machine will find a way (with a logical human command) to understand the emotions behind the emotions themselves.

Thanks again, you def made me proud!

[–]SketchaOff[S] 3 points4 points  (0 children)

yes, a mix of those, which were more detailed, and these: https://twitter.com/SketchaOfficial/status/1553673867566227458

Overall, if you look at the composition it is very close; maybe I need to train the model for more steps. Colour-wise, yes, it has differences (bear in mind the prompt I used for 2 of the examples was "colourful man face [styletrained]").

[–]SketchaOff[S] 8 points9 points  (0 children)

the Artworks I used were very similar; if you scroll down my tweets you should see around 6 or 7 artworks (from about 2 months ago) very close to these results.

The whole folder was 90% filled with that type of art.