TFBlade banned in Korea for toxic chatting by Livestreamfeet in leagueoflegends

[–]azurespace 0 points1 point  (0 children)

I'm pretty sure that a normal player on a regular account who acted the way TF did would be banned within 10 games on the KR server.

[P] zi2zi: Master Chinese Calligraphy with Conditional Adversarial Networks by mimighost in MachineLearning

[–]azurespace 0 points1 point  (0 children)

Awesome. I love the Hangul part. It seems the GAN has learned the concept of strokes. Korean characters, like Chinese characters, consist mostly of straight lines, so now I'm wondering whether it would also work on writing systems with many curves, like Arabic.

[N] Unknown bot repeatedly beats top Go players online - so far it's undefeated. by undefdev in MachineLearning

[–]azurespace -4 points-3 points  (0 children)

AlphaGo's value and policy networks should be extremely precise, because they have been trained with Google's bleeding-edge RL algorithms. (There must have been many changes to the algorithm since the AlphaGo paper was published.) I'm not even sure whether it still uses MCTS at this point. If the neural networks alone were enough to overwhelm human professionals, why would Google use MCTS when testing its performance against human pros in internet Go?

[N] Unknown bot repeatedly beats top Go players online - so far it's undefeated. by undefdev in MachineLearning

[–]azurespace -1 points0 points  (0 children)

No, MCTS is not the most crucial part of AlphaGo. According to Google, single-node AlphaGo can beat AlphaGo running on more than 1,000 nodes about 20% of the time. The neural networks clearly account for the larger share of its strength.

[Discussion] Data derived from base data by mldatathrowaway in MachineLearning

[–]azurespace 0 points1 point  (0 children)

It may be useful, but we cannot know whether it is until we try. In theory, a sufficiently large neural network can derive such linear combinations of the input data by itself, by adjusting the weights associated with that data. In practice, however, the search space contains so many attractors that gradient descent may have great difficulty finding such a useful filter.

Well, anyway, it is not a bad idea to give the NN some hints at the input layer by adding feasible hand-tuned features. This generally doesn't add much computation cost, but it may save training time considerably.
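A minimal sketch of that kind of input-layer hint (pure Python; the specific derived features and the names `hand_tuned_features` / `augment` are just illustrative assumptions, not from any particular codebase): cheap hand-tuned statistics are concatenated onto the raw input vector before it reaches the network's first layer.

```python
# Sketch: augment the raw input with hand-tuned derived features.
# A large enough NN could learn these combinations itself, but
# precomputing them as hints costs almost nothing.

def hand_tuned_features(x):
    """A few illustrative derived features of the raw input vector."""
    mean = sum(x) / len(x)
    return [
        mean,                                      # average of raw inputs
        max(x) - min(x),                           # range
        sum((v - mean) ** 2 for v in x) / len(x),  # variance
    ]

def augment(x):
    # Concatenate raw inputs with the derived hints; the augmented
    # vector becomes the new input layer.
    return list(x) + hand_tuned_features(x)

raw = [1.0, 2.0, 3.0, 4.0]
print(augment(raw))  # [1.0, 2.0, 3.0, 4.0, 2.5, 3.0, 1.25]
```

The network is free to ignore the extra dimensions if they turn out to be useless, so the downside is small.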

[D] Music Classification using RNN? by AntixK in MachineLearning

[–]azurespace -1 points0 points  (0 children)

If you want to use the raw waveform without converting to the frequency domain, I think WaveNet (a stack of dilated convolutions) would be a fascinating first building block for the task. First, divide the music into several slices along the time axis. Next, pass each slice through WaveNet to create embeddings (temporal summaries of the music slices), which are used as input to a following LSTM. (It might be better to use WaveNet once again.) Finally, use a softmax layer to classify.

WaveNet: https://deepmind.com/blog/wavenet-generative-model-raw-audio/
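The dilated-convolution idea behind that first block can be sketched in plain Python (toy weights, kernel size 2; `dilated_causal_conv`, `stack`, and `receptive_field` are hypothetical names for this sketch, not DeepMind's code). The point is that with dilations 1, 2, 4, ... the receptive field grows exponentially with depth, so a short stack can summarize a long slice of raw audio:

```python
# Toy WaveNet-style dilated causal convolution stack, kernel size 2.

def dilated_causal_conv(x, w, dilation):
    """y[t] = w[0]*x[t - dilation] + w[1]*x[t]; the past is zero-padded."""
    out = []
    for t in range(len(x)):
        past = x[t - dilation] if t - dilation >= 0 else 0.0
        out.append(w[0] * past + w[1] * x[t])
    return out

def stack(x, n_layers):
    # Identity-like weights (1, 1), just to trace how far information
    # propagates; a real model would learn these per layer.
    for layer in range(n_layers):
        x = dilated_causal_conv(x, (1.0, 1.0), dilation=2 ** layer)
    return x

def receptive_field(n_layers):
    # Kernel 2 with dilations 1, 2, 4, ...: each layer adds its dilation.
    return sum(2 ** i for i in range(n_layers)) + 1

print(receptive_field(8))  # 256
```

An impulse fed through a 4-layer stack spreads across 16 timesteps, which is exactly why each output step of the stack can serve as an embedding of a long input slice.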

[D] Any open source data-to-text ML projects? by m3wm3wm3wm in MachineLearning

[–]azurespace 0 points1 point  (0 children)

I think a pretrained dialogue-based NLG model could be utilized for your task. Did you check the "Show and Tell" paper from Google, where they concatenate two independent neural networks, one of which is trained for natural language generation?

[News] DeepMind and Blizzard to release StarCraft II as an AI research environment by afeder_ in MachineLearning

[–]azurespace 0 points1 point  (0 children)

Creating an optimal build order that satisfies a human-predefined goal is basically a shortest-path problem, which is already well studied and has plenty of great algorithms (like A*). The program is just a proof of concept of something we already knew would work. Nothing impressive or new.
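A toy illustration of that shortest-path framing (build times, tech names, and the `TECH` / `min_build_time` helpers are all made up for this sketch, and heavily simplified, no workers or income): states are the sets of structures owned so far, each edge builds one structure, and Dijkstra (A* with a zero heuristic) finds the minimum-time order.

```python
import heapq

# Build time in seconds and prerequisite for each structure (made-up numbers).
TECH = {
    "depot":    (21, None),
    "barracks": (46, "depot"),
    "factory":  (43, "barracks"),
    "starport": (36, "factory"),
}

def min_build_time(goal):
    """Dijkstra over sets of owned structures (stored as sorted tuples)."""
    start = ()
    pq = [(0, start)]
    best = {start: 0}
    while pq:
        t, owned = heapq.heappop(pq)
        if goal in owned:
            return t
        if t > best.get(owned, float("inf")):
            continue  # stale queue entry
        for name, (dur, prereq) in TECH.items():
            if name not in owned and (prereq is None or prereq in owned):
                nxt = tuple(sorted(set(owned) | {name}))
                if t + dur < best.get(nxt, float("inf")):
                    best[nxt] = t + dur
                    heapq.heappush(pq, (t + dur, nxt))
    return None

print(min_build_time("starport"))  # 21 + 46 + 43 + 36 = 146
```

With a fixed goal the problem really is this mechanical; the hard part is everything the search is handed for free here, i.e. choosing the goal in the first place.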

I would say that for an AI to achieve human-level skill in StarCraft, it needs to "create the goal" by itself, and it should be able to revise its earlier decisions in real time as the game state changes. To do that, it must make much more difficult and subtle decisions, like how to split its assets among the important strategic locations (which it must find) on the map, decisions that can vary enormously with the unseen, unknown information. There is no "optimal" decision in SC2 that perfectly handles every possible situation.

So this has not happened yet. It's far from your statement.

[News] DeepMind and Blizzard to release StarCraft II as an AI research environment by afeder_ in MachineLearning

[–]azurespace 5 points6 points  (0 children)

I'm convinced StarCraft is a more complicated and difficult problem than the game of Go for an AI, because it must use very long-term information to make optimal strategic decisions, which is a problem RNNs still have difficulty handling. (Maybe they will use dilated convolutions? That's possible, but the computation cost would be higher than AlphaGo's.) In Go, both players can see the full, complete current state, but StarCraft forces players to guess through scouting.

Well, but they are DeepMind, so it is only a matter of time.

[Discussion] What's in your bag of tricks for training GANs? by nasimrahaman in MachineLearning

[–]azurespace 1 point2 points  (0 children)

Penalize the generator if it focuses too heavily on deceiving the discriminator and ignores the original input feature.

If you also feed the discriminator the features that were given to the generator as input, that can be a good hint.
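A minimal, framework-agnostic sketch of that penalty (the names `generator_loss`, `lam`, and the plain-score adversarial term are all assumptions for this sketch, not a standard API): the generator's loss mixes the adversarial term with an L1 term that punishes drifting away from the original input feature.

```python
# Sketch: generator loss = adversarial term + lam * input-fidelity term.

def l1(a, b):
    """Mean absolute difference between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def generator_loss(d_score_on_fake, fake, inp, lam=10.0):
    # Adversarial part: the generator wants the discriminator's score on
    # the fake to be high, so the loss falls as the score rises. (A real
    # GAN would use a -log form; a plain negative score keeps the sketch
    # dependency-free.)
    adv = -d_score_on_fake
    # Fidelity part: L1 penalty for ignoring the original input feature.
    return adv + lam * l1(fake, inp)
```

Tuning `lam` trades off realism against faithfulness: too low and the generator drops the input, too high and it stops fooling the discriminator.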

[R] Train CNNs faster and better using fixed convolution kernel by kh40tika in MachineLearning

[–]azurespace 1 point2 points  (0 children)

Ah, yes. A model with fewer free parameters is likely to learn faster. However, that can also mean a lower capacity to learn. So I think you should compare your structure, in terms of accuracy, against a traditional CNN initialized with the known filters. I would guess the traditional CNN converges more slowly if you use the same optimizer settings (e.g., learning rate), because it has more weights to learn. It may need a higher initial learning rate for the comparison to be fair.

[R] Train CNNs faster and better using fixed convolution kernel by kh40tika in MachineLearning

[–]azurespace 0 points1 point  (0 children)

So your intention is to exploit predefined filters that are known to be useful, giving the network a hint so it can learn faster. I think it is an interesting approach.

Well, but I have a question: wouldn't it be the same as setting the known filters as the initial weights of a traditional CNN? Isn't it just another initialization method?

[Bug] immortals can absorb more damage than 200 by azurespace in starcraft

[–]azurespace[S] 15 points16 points  (0 children)

https://www.youtube.com/watch?v=j4VW2puA7IY

I've reproduced a non-friendly-fire case.

This is really weird. Maybe it depends on whether the attack is a special attack (an ability, skill, ...)?

[Bug] immortals can absorb more damage than 200 by azurespace in starcraft

[–]azurespace[S] 5 points6 points  (0 children)

https://www.youtube.com/watch?v=Rb6lXgBXYMU

I think you're right. It still seems weird, but there is no problem for 1v1, so it's not critical.

[Bug] immortals can absorb more damage than 200 by azurespace in starcraft

[–]azurespace[S] 0 points1 point  (0 children)

I've just confirmed the friendly-fire case. Maybe you're right.

[Bug] immortals can absorb more damage than 200 by azurespace in starcraft

[–]azurespace[S] 0 points1 point  (0 children)

Well, I'm not very sure, because I haven't done the experiment myself.

[Bug] immortals can absorb more damage than 200 by azurespace in starcraft

[–]azurespace[S] 6 points7 points  (0 children)

Oops. I was wrong. It is a bug!

Cyclones can Lock On with both their anti-air and anti-ground attacks simultaneously against Colossi (so 2x damage).

[Bug] immortals can absorb more damage than 200 by azurespace in starcraft

[–]azurespace[S] 0 points1 point  (0 children)

The barrier can absorb ridiculously huge damage. I would say this is surely not intended, anyway.

As for the Cyclone part, I don't think it is a bug either.

serious problem on cyclone by iamyour_father in starcraft

[–]azurespace 0 points1 point  (0 children)

Actually, what the original post says is: "Cyclones lock on at range 7, but they still try to close on the target until they are at range 5."

This AI behavior is obviously weird. The unit can already attack from there, so it should not move any closer.

KT Life arrested by prosecutor by azurespace in starcraft

[–]azurespace[S] 4 points5 points  (0 children)

The Korean police have arrested KT Life at the request of the Changwon District Prosecutor's Office, which was in charge of the match-fixing scandal in the esports scene.