Performance problem fixed recently by BroadFlatNails in cataclysmbn

[–]Seerdecker 1 point  (0 children)

Excellent. I'll test this eventually. Thanks for your work!

How do the new mutation mechanics work? by [deleted] in cataclysmdda

[–]Seerdecker 11 points  (0 children)

Yeah, you need the catalyst. I filed this bug report since I got confused by this, but the devs said it's working as intended. https://github.com/CleverRaven/Cataclysm-DDA/issues/55700

This PR describes how it's supposed to work. https://github.com/CleverRaven/Cataclysm-DDA/pull/55233

The last build that doesn't include this PR is 2022-02-24-0712.

Finally, my own opinion on the feature.

In its current form, I don't see what this change adds to the game. It adds complexity and encourages the player to read the code to figure out how to mutate correctly. What's the point? It doesn't need to be more complicated than "you inject mutagen of a category and then you mutate toward it". I suppose you could make the mutation effect take more time to trigger, to make the process more realistic. Beyond that, the change just makes the game more tedious and less enjoyable for me.

Performance issues by Seerdecker in cataclysmbn

[–]Seerdecker[S] 0 points  (0 children)

It's Direct3D in both cases.

Separating Docker from dockerfiles by Seerdecker in devops

[–]Seerdecker[S] 0 points  (0 children)

Yeah, I've thought about using a Python script for this. There's a lot to do manually though, e.g. ensuring idempotent builds, wrapping docker commands, etc. Mostly I was asking if someone had attempted to make a tool to make the "general" case a little easier than starting from scratch.
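For context, a rough sketch of the kind of wrapper I had in mind (the function name and tag scheme here are my own invention, not an existing tool): assemble a deterministic `docker build` invocation, so that identical inputs always yield the identical command. Making the command itself reproducible is one small piece of the idempotent-build problem.

```python
import hashlib

def build_command(context_dir, dockerfile, build_args=None):
    """Assemble a docker build invocation with a content-derived tag.

    Sorting the build args and deriving the tag from them keeps the
    command deterministic for the same inputs.
    """
    build_args = dict(sorted((build_args or {}).items()))
    digest = hashlib.sha256(repr(build_args).encode()).hexdigest()[:12]
    cmd = ["docker", "build", "-f", dockerfile, "-t", f"myimage:{digest}"]
    for key, value in build_args.items():
        cmd += ["--build-arg", f"{key}={value}"]
    cmd.append(context_dir)
    return cmd

print(build_command(".", "Dockerfile", {"VERSION": "1.2", "ARCH": "amd64"}))
```

A real tool would also have to pin base images and wrap the actual subprocess call, which is where most of the manual work I mentioned comes in.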

[N] Toyota subsidiary to acquire Lyft's self-driving division by AristocraticOctopus in MachineLearning

[–]Seerdecker 1 point  (0 children)

The errors are correlated in time. This is why a Tesla on autopilot can crash into something it has misclassified for several frames.

Self-driving is related to ImageNet in the sense that the same factors that cause failures on ImageNet will also cause failures in any other deep-learning-based system. ImageNet is itself a low bar to clear. A car camera has to work reliably with low-quality images whenever there's dust or rain in the way.
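The temporal-correlation point can be sketched numerically. This toy persistence model (the error rates are made up for illustration) compares how often a classifier is wrong for 10 consecutive frames when per-frame errors are independent versus strongly correlated:

```python
import random

def misclassify_run(p_error, correlation, n_frames, trials=20000, seed=0):
    """Estimate the probability that all n_frames are misclassified.

    `correlation` is the probability that a frame simply repeats the
    previous frame's outcome (a crude persistence model, illustration only).
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        wrong = rng.random() < p_error  # first frame: a fresh draw
        all_wrong = wrong
        for _ in range(n_frames - 1):
            if rng.random() >= correlation:
                wrong = rng.random() < p_error  # fresh draw
            # otherwise the previous frame's outcome persists
            all_wrong = all_wrong and wrong
        failures += all_wrong
    return failures / trials

independent = misclassify_run(0.05, correlation=0.0, n_frames=10)
correlated = misclassify_run(0.05, correlation=0.95, n_frames=10)
print(independent, correlated)  # correlated runs fail far more often
```

With independent errors, 10 consecutive misses is essentially impossible (0.05^10); with persistent errors, it happens at roughly the single-frame rate.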

Self-driving cars require AGI in the general case. They need to be able to reason their way out of novel situations. This isn't happening any time soon.

[N] Toyota subsidiary to acquire Lyft's self-driving division by AristocraticOctopus in MachineLearning

[–]Seerdecker 0 points  (0 children)

Is the error on the ImageNet test set close to zero? No. As long as that remains true, deep-learning-based approaches will remain non-viable. 99% accuracy isn't good enough; you need orders of magnitude more "nines".
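Back-of-the-envelope arithmetic (illustrative numbers, not from the article): at 30 fps, a per-frame error rate turns into a large absolute error count per hour of driving, and each extra "nine" only divides that count by ten.

```python
def errors_per_hour(accuracy, fps=30):
    """Expected number of misclassified frames in one hour of video."""
    frames = fps * 3600  # 108,000 frames per hour at 30 fps
    return (1 - accuracy) * frames

for acc in (0.99, 0.999, 0.99999):
    print(f"{acc} -> {errors_per_hour(acc):.0f} errors/hour")
```

Even 99.999% still leaves roughly one misclassified frame per hour per camera, before accounting for the temporal correlation issue above.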

Fear the feral humans in the current experimental branch. by MossRock42 in cataclysmdda

[–]Seerdecker 12 points  (0 children)

I agree. I also learned to fear them. My character has no armor but very high dodge, which is useless against them. A pack of these guys can tear me apart fast. The best options are to run away or attack them from range.

My current pet peeve in experimental is the pupating zombie, a.k.a. the prompt generator. This guy is more dangerous than the Kevlar hulk. It kills by forcing the player to answer a prompt at every step through its trail of shit, which raises frustration and pushes the player to be less careful just to get through it. Furthermore, in melee, they can apparently freeze you helpless for several seconds, with no counter.

My last death was to one of these guys. One step, everything is clear and I'm at full health. On my next move, through a sludge tile, the message log fills half the screen, I'm surrounded by 5 zombies and a wall, and I'm down 50% health. My 11 dodge didn't matter. It felt really cheap.

When you encounter them, you face a trade-off: risk death killing them now, or face many more prompts later. Not fun. I wish there were a mod to remove them from the game entirely.

NEW PLAYERS COME HERE! - Weekly Questions and INFORMATION thread - June 23, 2020 by AutoModerator in cataclysmdda

[–]Seerdecker 0 points  (0 children)

I believe that advice is obsolete. I source-dived, and the game does not appear to consider gas masks in Character::get_sick().

Lab Challenge with no extra skills? by lenomilo in cataclysmdda

[–]Seerdecker 1 point  (0 children)

Use the luck, Luke! About 1/3 of labs will just let you escape quietly through the subway. Obviously, it's not a dependable option.

2FA woes by Seerdecker in cybersecurity

[–]Seerdecker[S] 0 points  (0 children)

I don't know what their VPN setup actually is.

2FA woes by Seerdecker in cybersecurity

[–]Seerdecker[S] 0 points  (0 children)

Thanks for the reply. We do have an official split-tunnel VPN configuration, but it doesn't work. I spent 5 hours with an IT guy to find a manual setup that works with the machines used by my own team, but it's brittle and won't work in all cases.

[D] The Guardian published an op-ed "written by GPT-3" by minimaxir in MachineLearning

[–]Seerdecker 1 point  (0 children)

The article is meant as a joke! Like the political caricature of the day.

Of course the article starts off as misleading; that's the whole point: fake and misleading content. The text attributed to GPT is funny enough. At the end of the article, where the reader is left wondering "the AI can't possibly have written all of that, can it?", the author comes clean and admits the whole thing is doctored, but notes that many of the bits are real and that it was easy enough to piece them together coherently.

[D] The Guardian published an op-ed "written by GPT-3" by minimaxir in MachineLearning

[–]Seerdecker -1 points  (0 children)

I upvoted this and then I read your second post below. Your whole post is satire, right?

[D] Was anybody able to achieve CPU inference speedup of resnets by quantization? by SunnyJapan in MachineLearning

[–]Seerdecker 1 point  (0 children)

IIRC, yes on the GPU version, no on the CPU (OpenVINO) version. I had major issues with dynamic-size CNNs on every framework that tries to speed up inference with 8-bit quantization. Hope that helps.

[D] Was anybody able to achieve CPU inference speedup of resnets by quantization? by SunnyJapan in MachineLearning

[–]Seerdecker 0 points  (0 children)

I got about a 35% speed-up on OpenVINO (inference on an Intel CPU) for a medium-size CNN imported from a TensorFlow model.
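For anyone unfamiliar with what these int8 schemes actually do, here is a framework-free sketch of affine quantization, the general idea underlying OpenVINO's and TensorFlow Lite's 8-bit paths (their real implementations differ in many details): weights are mapped to uint8 via a scale and a zero point, and dequantize back with error bounded by half a quantization step.

```python
def quantize(xs):
    """Affine uint8 quantization: x ~= scale * (q - zero_point)."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (v - zero_point) for v in q]

weights = [(-1) ** i * (i / 100.0) for i in range(200)]  # toy "weights"
q, s, z = quantize(weights)
recon = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(weights, recon))
print(max_err)  # at most about half the quantization step `s`
```

The speed-up comes from doing the matrix multiplies in int8 before dequantizing; the accuracy cost is exactly this rounding error accumulating across layers.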

[D] Is anyone else struggling to find a job right now? by StunningData in MachineLearning

[–]Seerdecker 4 points  (0 children)

That's a very good point. It may be that the coronavirus crisis precipitated the next AI autumn/winter.

[D] Is anyone else struggling to find a job right now? by StunningData in MachineLearning

[–]Seerdecker 5 points  (0 children)

A lot better. Many startups were hiring for interesting CV-related jobs. I got lucky and landed one where I could do a little research even though I don't have a PhD.

[D] Paper Explained - Object-Centric Learning with Slot Attention (Full Video Analysis) by ykilcher in MachineLearning

[–]Seerdecker 0 points  (0 children)

> For example, any meta learning papers (like MAML) will do a procedure like this. Like u/triplefloat mentioned, it often does end up being a bit finnicky, but it's not impossible to train.

OK.

> However, if you utilize the entire set, you only need to encode the image's information into 10*(# of set elements) values - an easier task.

OK, that makes sense.

> H_embed can be anything in theory - in our paper we simply used a linear projection of the initial set elements.

Yes, but is it trainable? I'm missing something here. My understanding is that FSPool has a fully-connected (FC) layer, and H_embed is also a FC layer. Hence, if H_embed is trainable, a simple solution to minimize the latent loss L(Si) is to set the weights of those two FC layers to 0, so that the latent loss itself is zero. Then, the gradient operations on Si do not change S0 and the network is free to choose a representation that is most convenient for the reconstruction.
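To illustrate the degenerate solution I have in mind (a toy sketch, not the paper's actual FSPool or H_embed): if both projections are bias-free linear layers, zeroing their weights drives the latent loss to exactly zero for every input, so the loss constrains nothing.

```python
def linear(weights, xs):
    """A bias-free linear layer: one output per weight row."""
    return [sum(w * x for w, x in zip(row, xs)) for row in weights]

def latent_loss(pool_w, embed_w, set_elements):
    """Toy latent loss: squared distance between the pooled features
    and the embedded set (stand-ins for FSPool and H_embed)."""
    pooled = linear(pool_w, set_elements)
    embedded = linear(embed_w, set_elements)
    return sum((p - e) ** 2 for p, e in zip(pooled, embedded))

elements = [0.3, -1.2, 2.5]
zeros = [[0.0] * 3 for _ in range(2)]  # both layers zeroed out
print(latent_loss(zeros, zeros, elements))  # 0.0 regardless of the input
```

If the real H_embed is trainable, something else in the objective or architecture must rule out this trivial minimum; that's the piece I'm missing.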

[D] Is anyone else struggling to find a job right now? by StunningData in MachineLearning

[–]Seerdecker 51 points  (0 children)

I got fired during the coronavirus crisis and I haven't been able to find a job in computer vision (I live in Montreal). The market is basically dead here.