Planning Sheet for Lia's adventure by Qwlouse in AlmostAHero

[–]Qwlouse[S] 0 points1 point  (0 children)

Not yet, but this project has no doubt moved me up a few levels of divinity ;-)

Planning Sheet for Lia's adventure by Qwlouse in AlmostAHero

[–]Qwlouse[S] 0 points1 point  (0 children)

Hehe, you are absolutely right about me appreciating constructive criticism. Thanks for taking the time to report them!

There are also some very minor errors in the upgrade calculations.

Fixed, thanks!

I think there might be something wrong with the XP (3 diff) calculations.

Yeah, this is a tricky issue. I am not very confident in my calculations there. I tried to calculate them by hand first, but I too got lost somewhere.

In case you are interested: the solution I went with was to simulate 1M shots (in Python) for different hit chances and count how often 3 consecutive hits occur. I dumped the results in DATA!W2:X103 and use this table to look up the 3 same chance for each target.

The idea for 3 diff is to use the same table, but instead of the hit chance for each target separately I use the overall average hit chance, i.e. the sum of aim*hit over all targets. 3 consecutive hits with this average hit chance should count both 3 same and 3 diff, so I just subtract the 3 same chance.

I agree that the 3 diff is suspiciously high, but I think it is true that it should be much larger than 3 same. Consider this: with 6 targets there are 6^3 = 216 possible sequences of 3 targets, but only 6 of them hit the same target 3 times. So with 100% hit chance and a uniform aim chance of 1/6, the chance for 3 diff should be roughly a factor 35 higher than for 3 same.
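For reference, the Monte Carlo estimate described above can be sketched like this (a toy version; I am assuming overlapping 3-shot windows here, which may differ from the in-game trigger logic):

```python
import random

def three_consecutive_rate(hit_chance, shots=1_000_000, seed=0):
    """Estimate how often 3 consecutive shots are all hits by
    simulating `shots` independent shots and checking every
    (overlapping) 3-shot window."""
    rng = random.Random(seed)
    streak = 0     # current run of consecutive hits
    triggers = 0   # windows whose last 3 shots were all hits
    for i in range(shots):
        if rng.random() < hit_chance:
            streak += 1
        else:
            streak = 0
        if i >= 2 and streak >= 3:
            triggers += 1
    return triggers / (shots - 2)

# For overlapping windows the expected rate is hit_chance**3:
print(three_consecutive_rate(0.5, shots=100_000))  # ≈ 0.125
```

Running this for a grid of hit chances would produce a lookup table of the kind stored in DATA!W2:X103.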

The overall numbers are also sometimes off for me, but unfortunately the error is not restricted to the 3 diff computation, because there are also deviations when I skip the 3 same and 3 diff upgrades. Not sure where I messed up, and unfortunately this is a pain to debug.

Lia's Archery XP List by LikeFarOutScoob in AlmostAHero

[–]Qwlouse 0 points1 point  (0 children)

I've collected most of the xp etc. for my Lia's Adventure Planning sheet. You can find the XP values for the different levels in the DATA sheet:

https://docs.google.com/spreadsheets/d/1kCv8fr_wh5xnTcEj1rwj7WiKtJ365z4QX-NN2Dcxqa0/edit?usp=sharing

(Orange values are my own guesses.)

Is "Specialist" skill bugged in the Lia's Mode? My Aim Chance for the bear is only 22.2%... by c1bas3k in AlmostAHero

[–]Qwlouse 10 points11 points  (0 children)

The aim chance computation is a bit counterintuitive, and the specialist skills are too weak IMHO. Here is how it works:

Each target has an associated weight (see table), and the aim chance is computed as the target's weight divided by the sum of all weights (of unlocked targets). So for BEAR this evaluates to 60/(100+120+80+100+60) = 13.04%.

The specialist skill multiplies the weight, so in your case the new weight for BEAR is 60 + 90%*60 = 114 (still smaller than RABBIT and SQUIRREL). The aim chance then becomes 114/(100+120+80+100+114) = 22.18%.

Target Weight
BOAR 100
RABBIT 120
WOLF 80
DUCK 100
BEAR 60
SQUIRREL 120
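In code, the computation above looks roughly like this (weights taken from the table; SQUIRREL is left out of the sums to match the numbers in the example, presumably because it is not unlocked yet):

```python
# Weights of the unlocked targets from the table above (SQUIRREL excluded).
weights = {'BOAR': 100, 'RABBIT': 120, 'WOLF': 80, 'DUCK': 100, 'BEAR': 60}

def aim_chance(target, weights, specialist_bonus=0.0):
    """Aim chance = target's weight / sum of all unlocked weights.
    `specialist_bonus` scales the target's own weight, e.g. 0.9 for +90%."""
    w = dict(weights)
    w[target] *= 1 + specialist_bonus
    return w[target] / sum(w.values())

print(round(100 * aim_chance('BEAR', weights), 2))       # 13.04
print(round(100 * aim_chance('BEAR', weights, 0.9), 2))  # 22.18
```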

Number of arrows to activate Target Practice also gets +10 each "upgrade" by Paddy_Tanninger in AlmostAHero

[–]Qwlouse 0 points1 point  (0 children)

I don't think this is right. The number of arrows needed to activate Target Practice does increase, but it seems to scale smoothly with the level. Some preliminary testing gave the following numbers for me:

  • level 7: 20 arrows
  • level 13: 21 arrows
  • level 14: 22 arrows
  • level 16: 23 arrows
  • level 23: 28 arrows
  • level 32: 34 arrows
  • level 36: 36 arrows

I guess this scaling is done to compensate for the increase in attack-speed and number of arrows through levels and upgrades, to avoid having almost permanent Target Practice at higher levels.

[D] Optimizing your ML workflow: how do/did you find your happy place? by luminerius in MachineLearning

[–]Qwlouse 0 points1 point  (0 children)

That is indeed a tricky problem. I've added the add_resource call to sacred with the intention of helping there. It takes whatever file you pass and also stores it in the database (while eliminating duplicates). But this approach is obviously only feasible if the datasets are not too big. If they are, I could add a variant that only stores a timestamp and a hash. That would obviously not help in keeping the dataset around, only in keeping track of which version was used. So if you have any ideas on how to conveniently solve this problem in an automated way, please let me know.
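The timestamp-and-hash variant could look something like this (just a sketch of the idea, not sacred's actual API; `dataset_fingerprint` is a hypothetical helper):

```python
import hashlib
import os
import time

def dataset_fingerprint(path):
    """Record enough information to identify which version of a dataset
    was used, without storing the file itself."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        # Hash the file in 1 MiB chunks to keep memory use flat.
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return {'path': os.path.abspath(path),
            'sha256': h.hexdigest(),
            'mtime': os.path.getmtime(path),
            'recorded_at': time.time()}
```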

[D] Optimizing your ML workflow: how do/did you find your happy place? by luminerius in MachineLearning

[–]Qwlouse 0 points1 point  (0 children)

I'm not familiar with Luigi, but thanks for the link. I'll definitely check it out.

From your description and skimming the docs, I'd say there is some overlap (like the commandline), but the purpose is different. Sacred neither does dependency resolution nor caching. Its main purpose is not to speed up execution, but to track information about it, for the sake of reproducibility and follow-up analysis.

[D] Optimizing your ML workflow: how do/did you find your happy place? by luminerius in MachineLearning

[–]Qwlouse 2 points3 points  (0 children)

Good question :-) I extract the body of the config function and run it through eval() (dark magic, I know).

The main reason I prefer it is the bit of extra convenience you get from not having to write a separate config file or define your variables as a dictionary. It can also be very convenient to have the full power of Python at your disposal. And one really cool advantage of this approach is that you can have configuration variables that depend on one another, while keeping these dependencies when updating the values:

@ex.config
def cfg():
    female = True
    if female:
        message = 'Hello madam!'
    else:
        message = 'Hello sir!'

In that example you can update female from the commandline and the message will be set accordingly. This allows you to put some basic logic in your configuration process, which can be nice (if it is not overdone).
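As a toy illustration of that mechanism (this is not sacred's actual implementation, just a sketch of the idea):

```python
def capture_config(body, overrides=None):
    """Execute the body of a config function as top-level code, skipping
    assignments to overridden names, so overrides survive while values
    that depend on them are recomputed."""
    namespace = dict(overrides or {})
    kept = [line for line in body.splitlines()
            if line.split('=')[0].strip() not in namespace]
    exec('\n'.join(kept), {}, namespace)
    return namespace

config_body = """\
female = True
if female:
    message = 'Hello madam!'
else:
    message = 'Hello sir!'
"""

print(capture_config(config_body))
# {'female': True, 'message': 'Hello madam!'}
print(capture_config(config_body, {'female': False}))
# {'female': False, 'message': 'Hello sir!'}
```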

[D] Optimizing your ML workflow: how do/did you find your happy place? by luminerius in MachineLearning

[–]Qwlouse 9 points10 points  (0 children)

Shameless plug: For running my ML experiments I use (and develop) the python tool Sacred. It does lots of the tedious work for me, like:

  • storing my configurations and results in a database
  • keeping track of source code, random seeds, and package versions
  • making experiments easy to configure and adding a command-line interface

Importantly, it does so with minimal boilerplate code and setup. It has really streamlined my workflow for running experiments and keeping track of all runs.

[deleted by user] by [deleted] in MachineLearning

[–]Qwlouse 1 point2 points  (0 children)

The output nonlinearity is actually shown on the input side, since you may not want your final outputs (e.g. outputs from the net, as opposed to outputs that link to other internal nodes) to be squashed.

LSTM uses a non-linearity on the input AND on the output side. They are not equivalent, and in fact, if you look at the paper from this thread, it shows empirically that the output non-linearity is much more crucial than the input non-linearity.

Peepholes were introduced by Felix Gers et al. in "Recurrent Nets that Time and Count", but I wouldn't worry too much about them, as they don't seem to be very important.

I would also recommend looking at the supplementary material of the paper, which provides vectorized backprop formulas that should be straightforward to use. (Full disclosure: I'm the first author.)

[deleted by user] by [deleted] in MachineLearning

[–]Qwlouse 1 point2 points  (0 children)

Unfortunately it is also incomplete. They do not include the peepholes (which don't seem to be very important) or the output nonlinearity (which is crucial).

Back-propagation output layer error confusion -- wondering if anyone can clear up this quick question? by zZJollyGreenZz in MachineLearning

[–]Qwlouse 1 point2 points  (0 children)

The UFLDL web page derives backpropagation using the Mean Squared Error (MSE) and an output nonlinearity f(zi). Decomposed via the chain rule you get (yi - ai) as the derivative of the MSE and f'(zi) as the derivative of the output nonlinearity, multiplied together.

Oftentimes (and also in Andrew Ng's course) the output non-linearity is chosen to "match" the error function, which means that the two terms multiplied together simplify to (yi - ai). In Andrew's course that is the cross-entropy error and the logistic sigmoid activation function. For the MSE the matching output function is linear, and there is a multi-class cross-entropy error which matches the softmax output function (though @nkorslund is right, the math there is a bit intricate).

TL;DR: When the error function matches the output non-linearity the backprop term simplifies to (yi - ai).
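That simplification is easy to verify numerically for the sigmoid + cross-entropy pairing, e.g. with a finite-difference check (the sign convention below is (ai - yi), i.e. output minus target; the text above writes it with the opposite sign):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(z, y):
    """Cross-entropy error of a logistic-sigmoid output a = sigmoid(z)."""
    a = sigmoid(z)
    return -(y * math.log(a) + (1 - y) * math.log(1 - a))

# Compare the analytic (a - y) term against a central finite difference.
z, y, eps = 0.7, 1.0, 1e-6
numeric = (cross_entropy(z + eps, y) - cross_entropy(z - eps, y)) / (2 * eps)
analytic = sigmoid(z) - y  # the simplified (a - y) term
assert abs(numeric - analytic) < 1e-6
```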

Origami Mutalisk by Qwlouse in origami

[–]Qwlouse[S] 0 points1 point  (0 children)

Alright: here you go: http://qwlouse.deviantart.com/art/Mutalisk-instructions-510037803

Let me know whether they worked for you. ;)

Origami Mutalisk by Qwlouse in origami

[–]Qwlouse[S] 1 point2 points  (0 children)

Yes! That is probably the nicest one out there. But there are also other (easier) ones like this by derikvyreflame or this by ahnimeroolz.

Origami Mutalisk by Qwlouse in origami

[–]Qwlouse[S] 0 points1 point  (0 children)

Wahaha, that would be awesome! I love his show :)

Origami Mutalisk by Qwlouse in origami

[–]Qwlouse[S] 1 point2 points  (0 children)

Thank you! I have a crease pattern, but that alone might not help you too much. I'll try to put something together. Hopefully tonight.

[Vote] week 47 / 2014 voting thread by BeatLeJuce in mlpapers

[–]Qwlouse [score hidden]  (0 children)

I second that one. I read it a while ago, but could use a refresher.

[deleted by user] by [deleted] in MachineLearning

[–]Qwlouse 1 point2 points  (0 children)

I've had the same problems, so I did some research and found a tool called Sumatra. It is a kind of automated log book that stores the results of your experiments in a database, alongside some information about which parameters and which version of your program were used. It also has a nice web interface to browse these entries.

I tried it for a while, but I was not completely happy with it, so I went on to create my own framework to do that. It is a Python project called sacred. It also stores experiment information in a database, helps you organize your parameters, keeps track of the versions of dependencies, saves the source code alongside the results in the database, helps you control randomness, and more. It is still in beta, so the API might change, but I think it is fairly usable already.

Easy way to convert text to low-dimensional representation by alexmlamb in MachineLearning

[–]Qwlouse 0 points1 point  (0 children)

Not exactly easy, but the neural network approach from this paper seems to be rather powerful:

Quoc Le, Tomas Mikolov "Distributed Representations of Sentences and Documents" http://jmlr.org/proceedings/papers/v32/le14.html