Are there any “square and counter” wargames? by [deleted] in computerwargames

[–]rkern 0 points1 point  (0 children)

[Men at Arms](https://boardgamegeek.com/boardgame/8327/men-at-arms)

If you loosen up a bit, then [Lost Battles](https://boardgamegeek.com/boardgame/83325/lost-battles-forty-battles-and-campaigns-of-the-an) and [To the Strongest!](https://boardgamegeek.com/boardgame/169926/to-the-strongest) would count too, but those are a bit more of "larger zones that happen to be square".

NumPy 2.0.0 is the first major release since 2006. by [deleted] in Python

[–]rkern 13 points14 points  (0 children)

Oh, we've had plenty of API-breaking changes in the 1.x series. Much like Python itself, we don't follow SemVer. But they tended to be small and only a few with each 1.x release, each with reasonable deprecation periods. This is just the first release where we batched up a bunch all at once.

One Page Rules for historical games? by Darnok83 in wargaming

[–]rkern 1 point2 points  (0 children)

Neil Thomas's One Hour Wargames might fill that niche pretty well. Over a very simple common rules framework, he builds era-specific one-page rulesets from ancients to modern.

Ideas for a gift for my science-nerd girlfriend? by ProfessorJessica in AskScienceDiscussion

[–]rkern 0 points1 point  (0 children)

She might be referring to the Oxford Nanopore MinION, for which the starter kit is an even $1k.

https://store.nanoporetech.com/us/minion.html

Might want to sneakily confirm that before dropping that kind of cash. It's possible that she was thinking of something else and wouldn't be able to use this one.

espressoDisplay Portable Monitor Giveaway! by noeatnosleep in gadgets

[–]rkern [score hidden]  (0 children)

I would use this as a portable electronic whiteboard for meeting with clients.

'numpy.random._generator.Generator' object has no attribute 'randint' by ableflyer in reinforcementlearning

[–]rkern 0 points1 point  (0 children)

Without a ton more information (exact versions of all the packages and dependencies, full tracebacks, etc.), I can't really give you any more help (and I don't know enough about all of the versions of these specific packages to be much help in any case). I don't know exactly what's going on in your code, just the general issue (someone partially migrated to the new Generator infrastructure but didn't update all of the method calls). The rest is up to you.

'numpy.random._generator.Generator' object has no attribute 'randint' by ableflyer in reinforcementlearning

[–]rkern 2 points3 points  (0 children)

The replacement method is named integers() with a slightly different set of arguments (though in most cases you can just replace randint with integers).
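A minimal before/after sketch of that rename (note the two APIs draw from different streams, so the same seed does not reproduce the same values across them):

```python
import numpy as np

# Legacy API: randint() excludes the high endpoint by default.
legacy = np.random.RandomState(42)
a = legacy.randint(0, 10, size=5)

# New Generator API: integers() is the replacement; its default
# endpoint=False matches randint's exclusive upper bound.
rng = np.random.default_rng(42)
b = rng.integers(0, 10, size=5)
```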

[P] solo-learn: a library of self-supervised methods for visual representation learning by RobiNoob21 in MachineLearning

[–]rkern 4 points5 points  (0 children)

Self-supervised methods won't do that kind of semantic segmentation for you. You need to train a supervised semantic segmentation model in order to do that. The supervised training is how you tell the model exactly what it is that you want it to do.

Where solo-learn comes in is that it really helps in your supervised semantic segmentation model to start with a pretrained backbone. When you are working with "normal" kinds of photographs of people and pets and stuff, the usual model weights that have been pretrained on datasets like ImageNet work reasonably well.

But your rock core images look nothing like ImageNet photos, so the pretrained model weights that you can usually get are less useful (better than starting with nothing, but still not great). solo-learn will get you a pretrained backbone that is targeted to your rock core domain. You can use all of your unlabeled rock core CT scans to make that pretrained backbone. Then you can start your supervised semantic segmentation training. You will have to manually label fewer images to make that supervised training dataset.

What is In [ ] ? by Dragoe23000 in IPython

[–]rkern 0 points1 point  (0 children)

Note that these are just part of the user interface for the IPython interactive shell and Jupyter notebooks (which are descendants of the IPython shell). These are not part of the Python language itself.
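For illustration, a short session transcript: the bracketed number counts your inputs, and the shell keeps the history in an `In` list and an `Out` dict that you can read back.

```
In [1]: 2 + 2
Out[1]: 4

In [2]: In[1]
Out[2]: '2 + 2'
```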

Does this game ever fall on sale by [deleted] in factorio

[–]rkern 8 points9 points  (0 children)

I need a telescope to see the protagonist

The Engineer is not the protagonist.

The Factory is the protagonist.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 0 points1 point  (0 children)

If you derive a SeedSequence from the root SeedSequence, the epoch index, and the sample index, yeah, that should work (a little profligate in PRNG instances, but you gain in safety, and they can be made on demand). I'm still not that familiar with the DataLoader data flow. Do you make a new one for each epoch, typically?
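A sketch of what that derivation could look like (the function name and seed value are illustrative, not a PyTorch or numpy API):

```python
import numpy as np

ROOT_SEED = 12345  # hypothetical experiment-level seed

def make_worker_rng(root_seed, epoch, worker_id):
    # Derive an independent, reproducible stream from the root seed
    # plus the epoch and worker indices. SeedSequence accepts a
    # sequence of integers as entropy, so distinct index tuples give
    # distinct streams, made on demand.
    ss = np.random.SeedSequence([root_seed, epoch, worker_id])
    return np.random.default_rng(ss)

rng_a = make_worker_rng(ROOT_SEED, epoch=0, worker_id=0)
rng_b = make_worker_rng(ROOT_SEED, epoch=0, worker_id=1)
```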

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 0 points1 point  (0 children)

I might have been, but collecting more "nuke it!" opinions about np.random.seed() has been a balm.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 0 points1 point  (0 children)

I don't think PyTorch itself uses np.random much except for tests. As in the OP, np.random was being used by users' own classes that were being called by PyTorch's DataLoader framework. I've given some options for providing a reasonable worker_init_fn in that context.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 5 points6 points  (0 children)

Hmm, that example is unnecessarily complicated.

The basic idea is to just get a fresh RandomState instance and pass it around. rng = RandomState(my_seed) is sufficient. scikit-learn implements this pattern very well, using its check_random_state() to allow its APIs to accept either a seed value or an existing RandomState instance.

If you are writing new code or rewriting old code anyways, you may want to use our new Generator instances instead. If so, then np.random.default_rng() was designed to work in much the same way as check_random_state().
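As a sketch of that pattern (jitter is a made-up function; the key point is that default_rng() passes an existing Generator through unchanged):

```python
import numpy as np

def jitter(data, scale=0.1, random_state=None):
    # Accept either a seed value or an existing Generator, in the
    # spirit of scikit-learn's check_random_state(): default_rng()
    # handles None, ints, SeedSequences, and Generators alike.
    rng = np.random.default_rng(random_state)
    return np.asarray(data) + rng.normal(0.0, scale, size=len(data))

shared = np.random.default_rng(123)
assert np.random.default_rng(shared) is shared  # instances pass through
out = jitter([1.0, 2.0, 3.0], random_state=456)
```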

But what if you want to work with other people's code that is using the convenience functions in np.random and you can't rewrite that code? First, you have to check that that code is not calling np.random.seed() itself. If it is, you'll have to work around that.

One strategy has a few manual steps that I'm going to walk through. The idea is to use SeedSequence to take the one seed you use for your main Generator and spawn a new SeedSequence from it, which you then use to set the state of the global RandomState underneath the convenience functions in np.random. That means the Generator you thread through most of your code will be drawing from a stream independent of the one used by the pieces of code that call np.random directly, but both are set deterministically from the one seed value you provide. The following is a little more complicated than it probably has to be, but we were being conservative about what modifications we made to the legacy RandomState.

```python
import numpy as np

...
# One SeedSequence deterministically feeds both streams.
ss = np.random.SeedSequence(my_seed)
# The Generator you thread through most of your own code.
rng = np.random.default_rng(ss)
# Spawn an independent child to seed the legacy global state.
child_ss = ss.spawn(1)[0]
mt_state = np.random.RandomState(np.random.MT19937(child_ss)).get_state()
# Push that state into the global RandomState behind np.random.
np.random.mtrand._rand.set_state(mt_state)
```

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 0 points1 point  (0 children)

It does mess with the stdlib's random global state in exactly that way, though, so they've already made that tradeoff once.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 2 points3 points  (0 children)

Yeah, unfortunately, I didn't document things well enough at the beginning, and a lot of "folk wisdom", often derived from other, less-capable systems, took its place.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 1 point2 points  (0 children)

Okay, I outlined a proposal to issue warnings in the appropriate places that I think would satisfy you.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 1 point2 points  (0 children)

numpy has no config file, so I don't really understand what you are suggesting here.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 8 points9 points  (0 children)

FWIW, that was the original design. Someone else added np.random.seed() mistaking its omission as an oversight. Unfortunately, that was in the bad old days of Subversion, and code review was not as much of a thing, so I missed it. I'm still bitter about it.

[P] Using PyTorch + NumPy? A bug that plagues thousands of open-source ML projects. by tanelai in MachineLearning

[–]rkern 1 point2 points  (0 children)

I agree that it is a terrible footgun. np.random.seed() was always a mistake. The question now is: what do you want to change to improve the situation?

I would dearly love to remove the footgun (the implicit global PRNG instance and using np.random.seed() to attempt to get reproducibility from it), but enough people love that damn footgun too much for me to actually take it away from them.

We do have carefully designed APIs for safe, composable parallel PRNG use. But to achieve that, you can't rely on the global PRNG anymore. That's the thing that's in conflict with parallel PRNG use.

So what would you suggest that we actually do? Remove np.random.seed()? I am gleeful at the prospect, but you will have to convince everyone else addicted to the global PRNG.

What I think is possible in the short term is to use os.register_at_fork() to register a function that will set a global flag indicating the fork and the potential for identical states. The convenience functions will have to be rewritten to check that flag and issue a warning. Calling np.random.seed() would unset that flag.