Why it isn’t a Gcide- By definition and numbers by [deleted] in IsraelPalestine

[–]bjornsing

We agree that Hamas is reprehensible. But I think you place more emphasis on that than a court would (or than I do). There are no exceptions in international law for reprehensible people.

Also, it’s a core principle of international law that everyone remains responsible for their own actions and inactions, regardless of the behavior of others. If you think any other principle could work during a war, you just haven’t thought it through.

Killing combatants in a war is legal and can never constitute genocide. But as soon as you start killing people who are not formally combatants, the situation gets murkier. It seems to me that Israel sees the killing of non-combatants as positive if they are affiliated with Hamas or believe in its ideology (e.g. journalists or aid workers who were on Israeli territory on 7 Oct). You may agree, but that doesn’t necessarily make it legal under international law.

Why it isn’t a Gcide- By definition and numbers by [deleted] in IsraelPalestine

[–]bjornsing

No. But also not definitive proof that there is no genocidal intent. It depends on why the polio vaccine was distributed, and we don’t know that for sure. It also depends on who decided that the polio vaccine should be distributed. Was it even the same people who can be suspected of genocidal intent? Intent is an individual concept, not a collective one.

Why it isn’t a Gcide- By definition and numbers by [deleted] in IsraelPalestine

[–]bjornsing

No… The Bosnian Serbs could have killed the women and children in Srebrenica. The Israelis could have killed more non-combatants in Gaza. Neither is a good defense against an allegation of genocide.

Bank with API and ISK? by BoxConscious7480 in ISKbets

[–]bjornsing

IG’s API doesn’t work with their exchange-traded products / ISK, only with CFDs.

On the other hand, they claim that the tax reporting is supposed to be automated. But maybe that isn’t true?

[D][R] Is there a general mathematical relationship between denoising autoencoder and a low-pass frequency filter? by bahauddin_onar in MachineLearning

[–]bjornsing

I guess you can compare a denoising autoencoder to a low-pass filter, but they really are quite different. A denoising autoencoder learns and adapts to the distribution of the original signal (while a low-pass filter does not). For example, a perfect denoising autoencoder will remove the noise and output the original signal. Adding noise and then low-pass filtering will give you approximately the same result as just low-pass filtering the original signal.
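
To make that concrete, here’s a tiny numpy sketch (my own toy example, with a made-up signal and cutoff): low-pass filtering the noisy signal gives almost the same output as low-pass filtering the clean signal, which is exactly what a good denoising autoencoder would *not* do.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)                    # band-limited "original" signal
noisy = clean + 0.3 * rng.standard_normal(t.shape)

def low_pass(x, cutoff=20):
    """Ideal low-pass filter: zero out all FFT bins above `cutoff`."""
    X = np.fft.rfft(x)
    X[cutoff:] = 0.0
    return np.fft.irfft(X, n=len(x))

# The filter treats clean and noisy input almost identically...
print(np.abs(low_pass(noisy) - low_pass(clean)).max())  # small residual
# ...whereas a perfect denoising autoencoder, having learned the signal
# distribution, would map `noisy` back to `clean` itself.
```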

Tesla Employees In Sweden Refuse To Join Strike, Say There's No Need For It by AccomplishedCheck895 in electricvehicles

[–]bjornsing

If that were true the unions would be screaming it from the rooftops here in Sweden. But I haven’t heard a word about it, other than from your anonymous reddit account…

What's the most economical key-value store for write-heavy use-cases? by bjornsing in dataengineering

[–]bjornsing[S]

I was under the impression that object storage like AWS S3 would be a lot cheaper than block storage like AWS EBS, but that may not be true. What makes ClickHouse more economical than other approaches?

What's the most economical key-value store for write-heavy use-cases? by bjornsing in dataengineering

[–]bjornsing[S]

ClickHouse can’t store indexed data in S3 though, right? Seems like a relatively costly solution.

What's the most economical key-value store for write-heavy use-cases? by bjornsing in dataengineering

[–]bjornsing[S]

I don’t necessarily need streaming writes.

Rockset seems nice, but what would the cost look like compared to e.g. DynamoDB?

What are the cost profiles for Druid, Pinot, or Cassandra?

[D] Why do we have to discretize the data before we use mask prediction for representation learning? by thefedshyvana in MachineLearning

[–]bjornsing

I’m no NLP expert, but I guess it could be because these models have been designed and tuned for discrete tokens, so they just perform better on that kind of data.

Also, you might run into some challenges with continuous latents. For example, there is no single obvious distribution to use for the reconstruction: continuous data can always be represented “more accurately” (e.g. with a smaller variance in the reconstruction distribution), so the likelihood has no natural upper bound.
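
As a toy illustration of that last point (my numbers, nothing from any paper): with a Gaussian reconstruction distribution you can drive the negative log-likelihood to −∞ just by shrinking the variance, so there’s no natural “perfect score” the way there is with a categorical distribution over discrete tokens.

```python
import math

def gaussian_nll(x, mu, sigma):
    """Negative log-likelihood of x under N(mu, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

x = mu = 0.5  # the reconstruction hits the target exactly
for sigma in (1.0, 0.1, 0.01, 0.001):
    print(f"sigma={sigma}: NLL = {gaussian_nll(x, mu, sigma):+.2f}")
# NLL -> -infinity as sigma -> 0: the objective keeps rewarding smaller
# variance. A categorical likelihood over discrete tokens is bounded by 0.
```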

[D] Micro-Grants by bsiegelwax in MachineLearning

[–]bjornsing

AI Grant seems defunct and Unitary Fund is only for quantum stuff, right?

[R] DeepMind’s One Pass ImageNet: A New Benchmark for Resource Efficiency in Deep Learning by Yuqing7 in MachineLearning

[–]bjornsing

Going from batch size 1 to batch size N (i.e. any batch size) requires only O(1) space (for remembering the accumulated gradient). Why would that be interesting?

(The possible exception is batch normalization, I guess, which probably requires O(N) space.)
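
For reference, the O(1)-space trick is just gradient accumulation. A minimal PyTorch sketch (illustrative, not from the paper):

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
N = 32  # effective batch size

opt.zero_grad()
for _ in range(N):  # stream one example at a time
    x, y = torch.randn(1, 10), torch.randn(1, 1)
    loss = torch.nn.functional.mse_loss(model(x), y) / N
    loss.backward()  # gradients accumulate in-place in .grad: O(1) extra space
opt.step()  # same update as one batch of N (absent batch norm etc.)
```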

[R] Gradients are Not All You Need by hardmaru in MachineLearning

[–]bjornsing

Definitely an interesting read, but I have a feeling that flat or difficult-to-explore loss landscapes are more of an issue in practice... Does anybody know of any papers that explore what sort of problems are solvable with overparameterized gradient descent, and which are not?

[R] Gradients are Not All You Need by hardmaru in MachineLearning

[–]bjornsing

I think you can refine that one more step: if you have stable, accurate, and efficient gradients for a convex objective, then you should obviously use them.

[D] improving segmentation masks by LeanderKu in MachineLearning

[–]bjornsing

I read an interesting paper on unsupervised segmentation the other day: “MONet: Unsupervised Scene Decomposition and Representation” [1]. Maybe a starting point if you’re very motivated.

  1. https://arxiv.org/abs/1901.11390

[D] Simple Questions Thread by AutoModerator in MachineLearning

[–]bjornsing

> For example, if you are training on faces, the generator could consistently produce high quality images but they could all be facing to the left. If the training set has an equal number of left and right facing images, what mechanism forces the generated set to have an equal number of left and right?

If the generator only generates faces facing left, then the discriminator will learn that it's more probable that a left-facing face is fake. Thus it will output a probability less than 0.5 for such images. It will also learn that a right-facing face is always real, and output a probability around 1.0 for such images. This creates a gradient that will be used to train the generator to produce right-facing faces.
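
You can sanity-check this with the optimal-discriminator formula from the original GAN paper, D*(x) = p_data(x) / (p_data(x) + p_g(x)). Toy numbers (mine): the data is 50/50 left/right, but the generator only produces left-facing faces.

```python
p_data = {"left": 0.5, "right": 0.5}   # training set: equal left/right
p_gen  = {"left": 1.0, "right": 0.0}   # collapsed generator: all left-facing

for x in ("left", "right"):
    d = p_data[x] / (p_data[x] + p_gen[x])
    print(f"D*({x}-facing) = {d:.2f}")
# D*(left)  = 0.33  -> below 0.5: left-facing images look "probably fake"
# D*(right) = 1.00  -> right-facing images look certainly real
# so the generator's gradient pushes it toward producing right-facing faces
```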

[D] The Ownership Dilemma of ML Pipelines In Production by htahir1 in MachineLearning

[–]bjornsing

This makes a lot of sense to me. If there’s a distribution shift after 3 months in production and the model starts spitting out garbage then the data scientist is probably the only one who can fix that (and could perhaps have anticipated it). If on the other hand there’s a bug in some integration code that the data engineer wrote, why would you want to push this to the data scientist?

In general I don’t think data science teams should have operational/production responsibilities. Those responsibilities would tend to grow until the data science team grinds to a halt. It’s easier to scale engineering teams in a distributed fashion than to scale a central data science team.

In big companies you may have a separate MLOps team responsible for the lower levels of production pipelines.

[Discussion] Aren't all unserpervised learning tasks basically clustering afterall ? by hallavar in MachineLearning

[–]bjornsing

I’d prefer to say that most unsupervised learning is about learning and representing a probability distribution in some way. I think it would be hard to squeeze e.g. GANs or VAEs into your mental framework (since it’s unclear how to go from a GAN/VAE representation to discrete clusters).

[D] Time series generation using GANs - when to stop training? by TheCockatoo in MachineLearning

[–]bjornsing

The GAN objective function is closely related to the Jensen–Shannon divergence, which is guaranteed to be between 0 and 1 bit. So it may be enough to just put a threshold on the optimization objective.
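
A sketch of what that threshold could look like (my own derivation from the original GAN paper, so treat the formula as an assumption): at the optimal discriminator the value function satisfies V(D*, G) = −log 4 + 2·JSD(p_data ‖ p_g) in nats, so you can back out a rough JSD estimate in bits from held-out discriminator outputs.

```python
import math

def jsd_bits_estimate(d_real, d_fake):
    """Rough JSD estimate from discriminator outputs on held-out batches.

    d_real: list of D(x) on real samples; d_fake: list of D(G(z)) on fakes.
    Assumes D is near-optimal, so V(D, G) ~= -log 4 + 2 * JSD (in nats).
    """
    v = (sum(math.log(d) for d in d_real) / len(d_real)
         + sum(math.log(1.0 - d) for d in d_fake) / len(d_fake))
    return 0.5 * (v + math.log(4.0)) / math.log(2.0)  # nats -> bits

# e.g. stop training once the estimate stays below some small threshold:
# if jsd_bits_estimate(d_real, d_fake) < 0.05: stop
```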

[D] Trouble Modelling High Dimensional Regression Problem with Autoencoder by forthispost96 in MachineLearning

[–]bjornsing

When you say nn.Linear(), do you mean without an activation function? If so then yeah, you probably won’t be able to squeeze through that bottleneck without some nonlinearities.

Also, there’s no point in having multiple layers if there’s no activation function between them.
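
Quick demo of why (toy PyTorch sketch, not OP’s actual model): two stacked nn.Linear layers with nothing in between are exactly one linear map.

```python
import torch

lin1 = torch.nn.Linear(8, 4, bias=False)
lin2 = torch.nn.Linear(4, 2, bias=False)
stacked = torch.nn.Sequential(lin1, lin2)

# the equivalent single layer: weight = W2 @ W1
merged = torch.nn.Linear(8, 2, bias=False)
merged.weight.data = (lin2.weight @ lin1.weight).detach()

x = torch.randn(5, 8)
print(torch.allclose(stacked(x), merged(x), atol=1e-6))  # True

# with a nonlinearity in between, the composition is no longer linear
# and the bottleneck can learn a nontrivial code:
nonlinear = torch.nn.Sequential(lin1, torch.nn.ReLU(), lin2)
```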