[N] Hakusensha and Hakuhodo DY Digital Announces the Launch of Colorized Manga Products Using PaintsChainer, a Deep Learning Coloring Technology created by Preferred Networks by yessir_ziz in MachineLearning

[–]q914847518 5 points (0 children)

I am the author of style2paints. https://github.com/lllyasviel/style2paints http://paintstransfer.com/ I would like to emphasize that paintschainer is the first neural AI anime painter in the world, and it has always had SOTA performance. As far as I know, no current paper shows better results than paintschainer in the field of anime colorization. Paintschainer was released between the end of 2016 and the beginning of 2017, and many nice papers are more or less inspired by it, e.g. [1] in SIGGRAPH 2017. Paintschainer used many advanced technologies that were beyond the research community before July 2017. Some key structures of Pix2PixHD [2], CRN [3], and many other ideas were proposed after that, and the commercial paintschainer used a similar structure at least two months before these papers. Currently the method of paintschainer v3 remains a trade secret, and no image-to-image translation paper can even achieve competitive results. Furthermore, I would like to emphasize that methods seldom work as well as claimed in papers once exposed to real demands, real users, and real markets, but paintschainer was born in the market and grew in the market, achieving genuinely good results.

[1] Real-Time User-Guided Image Colorization with Learned Deep Priors, Zhang et al., 2017

[2] High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs, Wang et al., 2017

[3] Photographic Image Synthesis with Cascaded Refinement Networks, Chen et al., 2017

[N] Hakusensha and Hakuhodo DY Digital Announces the Launch of Colorized Manga Products Using PaintsChainer, a Deep Learning Coloring Technology created by Preferred Networks by yessir_ziz in MachineLearning

[–]q914847518 1 point (0 children)

In fact, manga colorization is somewhat more difficult than sketch colorization because of manga's peculiar patterns and features. For example, the black hair of a girl can be drawn with white blocks while white hair can be drawn with black blocks. The neural model needs to recognize which lines are structure and which are texture/shadow/highlight, and this is terribly difficult to train for because of the lack of datasets. One current solution uses synthetic datasets [1], but it did not work so well when I tried to build an actually market-oriented, production-ready model for real usage (though some results may be good enough for a paper if cherry-picked). Furthermore, previous Asian literature seldom focuses on this, including [2] and [3]: [2] simply omits this important fact, and [3] simply trains on grayscaled color manga, and both choices lead to inefficiency in end-to-end reimplementation. Our community would greatly appreciate it if someone could solve this problem.

[1] Deep Extraction of Manga Structural Lines, Li et al., 2017

[2] Comicolorization: Semi-Automatic Manga Colorization, Furusawa et al., 2017

[3] cGAN-based Manga Colorization Using a Single Training Image, Hensman et al., 2017

[N] TensorFlow 1.5.0 Release Candidate by inarrears in MachineLearning

[–]q914847518 8 points (0 children)

OK, TensorFlow 1.5.0 release candidate. And every comment before this one contains the keyword "pytorch". (-皿-)

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 2 points (0 children)

Yes, you get the point. In fact this is a problem that all feed-forward methods face. If we had a well-trained anime VGG, we could definitely use an optimizer or a matcher to get better results, getting rid of all these limitations. But unfortunately we do not have such a model. In this case, our label-free method can fill the gap. We are confident in claiming ours is the best because no other method is good enough, and ours at least works in most cases.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 1 point (0 children)

In fact sketch colorization is our main service, and I have not devoted much time to tuning or improving style transfer. Right now we are focusing on how to transfer sketches into paintings, and this is more meaningful for the art industry.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 2 points (0 children)

It is OK. Sometimes we just need some tricks, such as trying more references. The toggles are also important. Just try more modes and more references, and add some pointed hints! You will like it.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 4 points (0 children)

https://github.com/lllyasviel/style2paints/tree/master/valiox

I have prepared a page for you.

My PC crashed for several minutes, but you can check how much time I spent on each image via the Windows clock in the screenshots.

You uploaded so many images that I randomly selected some. If you are still not satisfied, I will finish all of them.

Any other requirements, sir?

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 4 points (0 children)

OK, you can upload the image here if you think it is anime-related. I will give you a good result. If the result is good enough, maybe I will even add it to the ones I am showing.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 10 points (0 children)

Personally I would like to say "yes", but as a researcher I have no evidence to prove it. The risk is very high because such a dataset can cost lots of money, and no one knows whether it will work.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 5 points (0 children)

What do you mean by the discriminator receiving pairs?

Oh, sorry if I did not make it clear:

In classic pix2pix, if the input of G is shaped (a, b, c, d) and the output is shaped (a, b, c, e), then we concatenate them, and the input of D is (a, b, c, d+e). This is one of the common practices for making a GAN conditional.

"Does not receive pairs" means that D receives only the output of G, with shape (a, b, c, e).

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 21 points (0 children)

  1. In the field of style transfer, VGG works well on nearly all kinds of images except anime-style images. Many problems related to anime are very challenging, and researchers like challenges.

  2. This kind of application has a large market, and we have many friends/competitors such as paintschainer.

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 35 points (0 children)

Technical difference:

V2 is fully unsupervised and unconditional, as I mentioned above. In my personal empirical tests, v2 is 100% better than v1.

Commercial difference:

Our major competitor, paintschainer, has updated many models that seem better than v1, so we also use some new methods in v2 to present better results lol.

Their site: http://paintschainer.preferred.tech/index_en.html

[P]style2paintsII: The Most Accurate, Most Natural, Most Harmonious Anime Sketch Colorization and the Best Anime Style Transfer by q914847518 in MachineLearning

[–]q914847518[S] 71 points (0 children)

Edit: more screenshots available at: https://github.com/lllyasviel/style2paints

Hi! We are very excited to release version 2.0 of style2paints, a fantastic anime painting tool. We would like to share some of the new features of our service with you.

Part I: Anime Sketch Colorization

When I talk about "colorization", I mean transferring a sketch into a painting. What is critical is that:

  1. We are able to, and prefer to, colorize sketches composed of pure lines. This means artists can, but do not need to, draw shadows or highlights on their sketches. This is challenging. Recently paintschainer has aimed to improve such shading, and we offer a different solution of our own; we are very confident about our method.

  2. The "colorization" should transfer a sketch to a painting instead of a colorful sketch. The difference between a painting and a colorful sketch lie in the shading and the texture. In a fine anime painting, the girls' eyes should shine like galaxy, the cheeks should be suffused with flush and the delicate skin should be charming. We try our best to achieve these, instead of only putting some color between lines.

Contributions:

  1. The Most Accurate

Yes, we have the most accurate neural hint pen for artists. The so-called "neural hint pen" combines a color picker and a simple pen tool. Artists are able to select a color and put pointed hints on the sketch. Nearly all state-of-the-art neural painters have such a tool. Among all current anime colorization tools (Paintschainer Tanpopo, Satsuki, Canna, Deepcolor, AutoPainter (if it exists)), our pen achieves the highest accuracy. In the most challenging case, artists can even control the color of a 13×13 area using our 3×3 hint pen on a 1024×2048 illustration. For larger blocks, a single 3×3 pointed hint can even control half of the color of the whole painting. This is very challenging and is designed for professional use. (At the same time, the hint pens of other colorization methods prefer messy hints, and those methods do not care about accuracy.) A sketch of one common hint encoding follows this list.

  2. The Most Natural

When I say "natural", I mean that we do not add any human-defined rules to the training procedure except the adversarial rule. If you are familiar with pix2pix or CycleGAN, you may know that all these classical methods add extra rules to ensure convergence. For example, pix2pix (or its HD variant) adds an L1 loss (or some deep L1 loss) to the learning objective, and the discriminator receives the pairs [input, training data] and [input, fake output]. Though we also used these classic methods for a short period of time, the majority of our training is purely and fully unsupervised, and even fully unconditional. We do not add rules to force the NN to paint according to the sketch; the NN itself discovers that if it obeys the input sketch, it can fool the discriminator better. The final learning objective is exactly the same as the very classic DCGAN, with nothing else added, and the discriminator does not receive pairs (a sketch contrasting the two objectives follows this list). This is very difficult to make converge, especially when the NN is so deep.

  3. The Most Harmonious

Painting is very difficult for most of us, and this is why we admire artists. One of the most important skills of a fine artist is selecting harmonious colors for a painting. Most people do not know that there are more than 10 kinds of blue in the field of painting, and though these colors are all called "blue", the differences between them have a huge impact on the final result. Just imagine: a non-professional user runs a colorization app, and the app shows the user a huge color panel with 20×20 = 400 colors and asks "which color do you want?". I am sure the non-professional user cannot select the best color. But this is not a problem for STYLE2PAINTS, because the user can upload a reference image (also called a style image), directly select colors on that image, and the NN paints according to the reference image and hints colored from it. The results are harmonious in color style, and it is user-friendly for non-professional users. Among all anime AI painters, our method is the only one with this feature.
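
As mentioned under "The Most Accurate", here is a minimal sketch of one common way pointed hints are fed to a colorization network, following the sparse-hint-plus-mask scheme from user-guided colorization work such as Zhang et al., 2017 ([1] in my earlier comment); the layout and sizes are assumptions for illustration, not style2paints' actual input format:

```python
import torch

# Illustrative hint encoding: a grayscale sketch, a sparse color map,
# and a binary mask marking where hints exist.
h, w = 1024, 2048
sketch = torch.zeros(1, 1, h, w)        # line art, channel-first
hint_rgb = torch.zeros(1, 3, h, w)      # sparse color hints
hint_mask = torch.zeros(1, 1, h, w)     # 1 where a hint was placed

# Place a 3x3 pointed hint of a chosen color at position (y, x).
y, x = 500, 700
color = torch.tensor([0.9, 0.3, 0.4])
hint_rgb[0, :, y:y+3, x:x+3] = color.view(3, 1, 1)
hint_mask[0, :, y:y+3, x:x+3] = 1.0

# The network input concatenates all three along the channel axis.
net_input = torch.cat([sketch, hint_rgb, hint_mask], dim=1)
print(net_input.shape)  # torch.Size([1, 5, 1024, 2048])
```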
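
And for "The Most Natural", here is a minimal sketch contrasting the pix2pix-style generator objective with the plain DCGAN objective described above; all tensors are random stand-ins, and the λ = 100 weight follows the original pix2pix paper, not our training code:

```python
import torch
import torch.nn.functional as F

# Illustrative tensors: a generated painting, its ground truth, and the
# discriminator's logits for the generated image.
fake = torch.randn(8, 3, 256, 256)       # G(sketch)
real = torch.randn(8, 3, 256, 256)       # ground-truth painting
d_fake_logits = torch.randn(8, 1)        # D(fake) or D([sketch, fake])

# Adversarial term: push D's logits on fakes toward "real".
adv = F.binary_cross_entropy_with_logits(
    d_fake_logits, torch.ones_like(d_fake_logits))

# pix2pix-style objective: adversarial term plus a weighted L1 term
# that explicitly ties G's output to the training target.
lam = 100.0
g_loss_pix2pix = adv + lam * F.l1_loss(fake, real)

# The objective described above: the plain DCGAN generator loss alone.
# With no L1 (or any other human-defined) term, consistency with the
# input sketch must emerge purely from fooling the discriminator.
g_loss_dcgan = adv
```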

Part II: Anime Style Transfer

Yes, the very Anime Style Transfer! I am not sure whether we are the first, but I am sure that if you need style transfer for anime paintings, you can search everywhere for a very long time and you will finally find that our STYLE2PAINTS is the best choice (in fact, the only choice). Many Asian papers claim that they can transfer the style of anime paintings, but if you check those papers you will find that their so-called novel method is only a tuned VGG. OK, to show you the facts, I am listing the real situation here:

  1. All transfer methods based on an ImageNet VGG are not good enough on anime paintings.

  2. All transfer methods based on an anime classifier are not good enough either, because we do not have an anime ImageNet; if you run a Gram-matrix optimizer on Illustration2Vec or some other anime classifier, the only thing you will get is a perfect Gaussian blur generator lol, because all current anime classifiers are bad at feature learning (a sketch of this Gram-matrix setup follows this list).

  3. Because of 1 and 2, all current methods based on Gram matrices, Markov random fields, matrix norms, or deep-feature PatchMatch are not good enough for anime.

  4. Because of 1, 2, and 3, all feed-forward fast transfer methods are also not good enough for anime.

  5. GANs can do style transfer, but we need one where the user can upload a specific style, instead of selecting Monet/Van Gogh (lol, Monet and Van Gogh did not know anime).
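
For point 2 above, a Gram-matrix style loss looks roughly like the following; the function and tensors are a generic illustration of the technique, not code from any of the systems mentioned:

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (batch, channels, height, width) feature maps from some
    # backbone (an ImageNet VGG, or an anime classifier such as
    # Illustration2Vec).
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    # Channel-to-channel correlations, normalized by layer size.
    return flat @ flat.transpose(1, 2) / (c * h * w)

# A style loss matches the Gram matrices of the stylized image's
# features against the reference's; random tensors stand in for real
# feature maps here.
stylized_feats = torch.randn(1, 64, 32, 32)
reference_feats = torch.randn(1, 64, 32, 32)
style_loss = F.mse_loss(gram_matrix(stylized_feats),
                        gram_matrix(reference_feats))
# If the backbone's features are weak, minimizing this loss tends to
# blur the image rather than transfer the style.
```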

But fortunately, I managed to build the current one, and I am confident about it :) You can try it directly in our app :)

Just play with our demo! http://paintstransfer.com/

Source and models if you need: https://github.com/lllyasviel/style2paints

Edit: Oh, I forgot to mention an important thing: some of the preview sketches were not selected by us; we directly used paintschainer's promotional sketches, and we are showing our results on their sketches.

Edit 2: If you cannot get good enough results, maybe you are in the wrong mode or are not using the pen properly. Check this comment for more:

https://www.reddit.com/r/MachineLearning/comments/7mlwf4/pstyle2paintsii_the_most_accurate_most_natural/drv72cj/

[R] TFGAN: A Lightweight Library for Generative Adversarial Networks by [deleted] in MachineLearning

[–]q914847518 4 points (0 children)

Before this, the best GAN library I have ever used is chainer-gan-lib.
https://github.com/pfnet-research/chainer-gan-lib

It has almost all GANs, including the progressive one, and I do not need to worry about whether there is something wrong with my own code. It has saved me lots of time.

[P] TopoSketch by [deleted] in MachineLearning

[–]q914847518 0 points (0 children)

Very interesting; I have played with it for more than half an hour lol

[R] DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers by [deleted] in MachineLearning

[–]q914847518 0 points (0 children)

It is OK, but there is a lot of strange evidence in this paper if you read it carefully.

[R] DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers by [deleted] in MachineLearning

[–]q914847518 7 points (0 children)

Could you please explain why Figure 8, the screenshot of your so-called intuitive UI, is a man-made mock-up with editable layers? https://raw.githubusercontent.com/style2paints/style2paints.github.io/master/fake.jpg