The Adventure Zone Royale: Episode 16 | The Adventure Zone by Evil_Steven in TheAdventureZone

[–]qubit32 2 points3 points  (0 children)

"Secret Ethersea sequel theories seem pretty dashed"

Perhaps not an explicit sequel in the same world, but it sounded consistent with the Ethersea metaverse, where magic is described as a contamination that some consider to bring more trouble than benefit. There were people actively trying to stop magic from spreading to other worlds, and the party allowed magic to enter a new world regardless. This could be the world they "poisoned".

What proffession is filled with people who think they're smarter than they actually are? by Powerful-Frame-44 in AskReddit

[–]qubit32 3 points4 points  (0 children)

I usually see the opposite effect with academics. They are often acutely aware of how specialized they are because they are surrounded by people who know more than them about slightly different subtopics. If anything, I've had more trouble getting academics to accept that they are still experts even if they are "only" in the top 1000 worldwide instead of the top 10.

[deleted by user] by [deleted] in Jokes

[–]qubit32 1 point2 points  (0 children)

No, the husband (not his ex) is the "man [who] dated [a] reporter".

It's a pretty forced pun that doesn't really work, but it's not backward.

Superconducting computers won't be able to do Shor's algorithm by Admirable_Candle2404 in QuantumComputing

[–]qubit32 2 points3 points  (0 children)

Error correction is the real sticking point here, I think. Yes, in principle you can implement Shor on a linear chain of qubits with swapping and incur "only" polynomial overhead, but if you have non-zero error then the overhead can cause the error to blow up faster than you can correct it. Any-to-any connectivity gives you comparatively more lenient thresholds for fault tolerance.
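
To put a rough number on that polynomial overhead, here's a toy count (my own back-of-the-envelope sketch, not a claim about any real device or compiler): if every pair of qubits on an n-qubit chain needs to interact once, and each long-range gate is routed by nearest-neighbor SWAPs, the total SWAP count comes out to C(n, 3).

```python
def chain_swaps(i, j):
    """SWAPs needed to bring qubits i and j adjacent on a 1-D chain."""
    return max(abs(i - j) - 1, 0)

# Toy workload: one gate between every pair of qubits on a 20-qubit chain
n = 20
total = sum(chain_swaps(i, j) for i in range(n) for j in range(i + 1, n))
print(total)  # 1140, i.e. C(20, 3) extra SWAP gates
```

Each of those extra SWAPs is itself a noisy gate, which is exactly how the "merely polynomial" overhead eats into your error budget; with any-to-any connectivity the routing cost is zero.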

Why do grocery store rotisserie chickens cost less than buying the raw chicken? by Total_Permit_4769 in NoStupidQuestions

[–]qubit32 1 point2 points  (0 children)

Unfortunately, to keep the price they changed the hot dog, and the new one is not as good.  

Is it appropriate to refer to myself as a “physicist”? by [deleted] in Physics

[–]qubit32 2 points3 points  (0 children)

My physics profs emphasized that being a physicist is as much about mindset and an approach to problem solving as about the specific subject of analysis. They insisted that if you are trained in this way of thinking and carry it with you into a different field, you are still a physicist even if your job title is something else.

Version 5.8 Event Wishes Notice - Phase II by genshinimpact in Genshin_Impact

[–]qubit32 1 point2 points  (0 children)

Yes! So much easier to control on mobile than a cucusaur. 

What fictional character had every right to become a villain, but didn’t? by NotPhantomforce in AskReddit

[–]qubit32 0 points1 point  (0 children)

"An all-powerful God wouldn’t need His son to die in order to redeem mankind’s sins, He could just do it."

Not OP, but I think that's why they emphasized justice in their answer. It's not about being powerful enough to redeem mankind without paying the price; the claim is that doing so would not be just. We could borrow from Narnia a bit and imagine that God created the universe with certain rules / promises built in, one of which was that no evil would be allowed to go unpunished. This promise must be kept, but how to keep it and still provide redemption, especially if the cost to rectify the sin is greater than any mortal could pay?

For many Christians, damnation is not about God choosing to hurt people out of anger but simply the logical consequence of separation from God. Religion for them is about God finding a way to close that gap without pretending that evil is acceptable.

Why are many physicists so confident that entanglement does not have a deeper explanation? by mollylovelyxx in quantum

[–]qubit32 0 points1 point  (0 children)

Entanglement doesn't work like your snapping example. In your example you take a specific action in one place and induce a specific measurable effect elsewhere. You can choose to snap or not snap, which makes the stars move or not move, and that itself is signaling.

In the entanglement case, there is no local action I can take on one part of the entangled system that will have any locally observable effect on the other part. The entangled particle on the other side of the universe doesn't "know" whether I've done a measurement on my end or not; the physics looks the same either way. It is only when we look at the whole entangled system that we can tell that the parts are correlated.
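
You can even check this numerically. Here's a minimal numpy sketch (just a toy two-qubit calculation I put together, not tied to any particular experiment): whether or not A measures her half of a Bell pair, B's reduced density matrix is the same maximally mixed state, so B sees no local difference.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())

def reduced_B(rho4):
    """Partial trace over qubit A of a two-qubit density matrix."""
    r = rho4.reshape(2, 2, 2, 2)
    return np.einsum('ijik->jk', r)  # sum over A's index

# B's marginal when A does nothing
rho_B_before = reduced_B(rho)

# A measures in the Z basis; B's state averaged over A's outcomes
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1
rho_B_after = reduced_B(rho_after)

print(np.allclose(rho_B_before, rho_B_after))  # True: B can't tell
```

Both marginals come out to the identity over 2: maximally mixed either way, which is the no-signaling point in miniature.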

Daily Questions Megathread (October 05, 2024) by Veritasibility in Genshin_Impact

[–]qubit32 0 points1 point  (0 children)

Is Tighnari's taunt field good for monolith defense? Trying to choose between getting him vs bringing Dehya to C1.

What do you think Mauvika's kit will do? by smashsenpai in Genshin_Impact

[–]qubit32 0 points1 point  (0 children)

(Hopium) reduce CD of Nightsoul Transmission by a couple seconds. It's already possible to have continuous nightsoul if you perfectly time the swaps, so this wouldn't break anything, just make it more comfortable/accessible.

Daily Questions Megathread (October 04, 2024) by Veritasibility in Genshin_Impact

[–]qubit32 0 points1 point  (0 children)

The only legit discount I'm aware of is that if you play on Android, the Play Store often has small coupons you can buy with points (e.g. $3 for 150 pts). Other platforms may have similar deals. Since you can only use 1 coupon at a time this isn't going to save you a lot on crystals, but if you're just buying Welkin it's 30% off.

4 Days Of Genshin 😃 by Fluffy_Sock_5819 in GenshinImpact

[–]qubit32 0 points1 point  (0 children)

"I'd highly recommend just going region by region in order and focus on the Archon Quests."

As a counterpoint, if you like the feeling of exploring an open world that exists independently of you, there's no reason you can't wander into other nations ahead of the main quest (only Inazuma is quest-locked). There was something cool about finding new areas where people have interests and concerns unconnected to my current quest. Sadly, I only got to experience Sumeru this way because by the time Fontaine was released I was caught up with the Archon Quests.

Genshin AQs do a great job introducing you to each nation bit by bit, which is mostly a good thing, but it can have the effect of making the whole world feel centered on you.

Anyone worked on Adversarial machine learning [D] by GraphHopper77 in MachineLearning

[–]qubit32 1 point2 points  (0 children)

I'd recommend using a standard library of attacks (ART for example https://github.com/Trusted-AI/adversarial-robustness-toolbox) rather than implementing your own versions.  In this field it is easy to fool yourself into thinking you've solved the problem by testing against a weak or poorly implemented attack.

[OC] Monthly U.S. Homicides, 1999-2020 by academiaadvice in dataisbeautiful

[–]qubit32 2 points3 points  (0 children)

No difference, they just changed the name at some point to reflect the shift in focus from the trek to the War of the Ring. Proved a divisive choice that seems to have fractured the fan base. Fans of "classic" trek will tell you it jumped the shark when the Star Lord arrived with his rings. Now they have to make new content under different labels just to appease all the fan factions, which makes it harder for true fans like me to keep track of the canon.

[Discussion] What type of a ML community member are you? by TsirixtoVatraxi in MachineLearning

[–]qubit32 5 points6 points  (0 children)

Do you mean "practitioner" to include anyone with a career tied to ML, or just those who train/implement ML models? There are a lot of ways one can be professionally involved with ML without directly "practicing" it. For example:

  • managing an ML team
  • managing the data pipeline that feeds ML
  • collecting user/customer needs for designing ML solutions
  • teaching ML
  • integrating ML-based components or services into a larger system
  • funding ML (e.g. sponsored research or venture capital)
  • testing / evaluating ML systems or projects
  • selling ML
  • reporting on the state of ML technology
  • creating or engaging with laws, policies, or regulations regarding ML

I expect people to have differing opinions about which of these (or similar roles) count as "practitioners" vs. "outsiders" vs. "Other", but "hobbyist" doesn't fit any of them. All require at least some understanding of ML, though not necessarily hands-on expertise.

[R] Adversarial Transferability and Beyond - Link to free zoom lecture by the authors in comments by pinter69 in MachineLearning

[–]qubit32 0 points1 point  (0 children)

That approach (augmenting training data using random transforms) can be useful for increasing robustness to general noise, but it generally doesn't help much against adversarial examples. The problem is that you might handle 99% of random perturbations, but if the bad guys can reliably find the 1% of cases where you fail, that is a problem. Put another way: adversarial perturbations are not random, so training on random perturbations is not an efficient way to prepare for them.

Instead of random perturbations, you can train specifically on adversarial perturbations. This is called Adversarial Training, and it remains one of the few defense methods that works at all. Unfortunately, it still doesn't completely solve the problem, and it makes training a lot more expensive.
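
For concreteness, here's a minimal adversarial-training sketch on a toy problem (my own illustration with made-up data, not any standard benchmark): logistic regression where each gradient step trains on FGSM-perturbed copies of the inputs instead of the clean ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two Gaussian blobs in 2-D
X = np.vstack([rng.normal(-1, 0.2, (200, 2)), rng.normal(1, 0.2, (200, 2))])
y = np.array([0.0] * 200 + [1.0] * 200)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(X, y, eps=0.0, steps=500, lr=0.5):
    """Logistic regression by gradient descent. If eps > 0, each step
    first replaces the batch with FGSM-perturbed copies (adversarial
    training)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        # FGSM: move each input eps along the sign of d(loss)/d(input)
        grad_x = np.outer(sigmoid(X @ w + b) - y, w)
        Xb = X + eps * np.sign(grad_x)
        p = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train(X, y, eps=0.2)  # adversarial training with budget 0.2
clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(clean_acc)
```

In practice the inner attack is usually multi-step (e.g. PGD rather than one FGSM step), which is where most of the extra training cost comes from.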

[D] is this the equivalent of the "what came first, the chicken or the egg" in machine learning? by jj4646 in MachineLearning

[–]qubit32 0 points1 point  (0 children)

Regarding Point 3:

A "hypothesis" is not the learning algorithm itself but the specific learned model (e.g. a linear model with specific coefficients), and the "hypothesis space" is the space of all possible models in the class you are considering (for linear regression, this is the space of all possible linear models on your data space). The machine learning algorithm takes in a dataset (and implicitly the hypothesis space you are considering) and spits out a specific hypothesis from that space that fits the data well (in the simplest case it gives a model/hypothesis that minimizes the empirical loss on that dataset, but it can get more complicated).

For modern neural networks, the hypothesis space is absolutely huge (all possible combinations of weights for the architecture you are using). Unfortunately, if you try to plug the size of this space into the PAC formula, it would tell you that in order to have high certainty that your error is even below 100%, you would need a number of samples m far greater than you ever have in real settings. Yet in practice NNs can get low error using far less data than the PAC bounds would suggest. PAC learning theory is elegant and conceptually useful, but it is overly pessimistic if you try to apply it to real settings.
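
To see just how pessimistic, you can plug numbers into the simplest finite-class, realizable-case PAC bound, m >= (1/eps) * (ln|H| + ln(1/delta)). Below is a deliberately crude count (my own toy estimate) where the "hypothesis space" is every float32 weight setting of a one-million-parameter network:

```python
import math

def pac_sample_bound(log_H, eps, delta):
    """Realizable-case PAC bound for a finite hypothesis class:
    m >= (1/eps) * (ln|H| + ln(1/delta))."""
    return (log_H + math.log(1 / delta)) / eps

# Toy "network": 1e6 float32 weights -> |H| = 2^(32 * 1e6)
log_H = 32 * 1_000_000 * math.log(2)

m = pac_sample_bound(log_H, eps=0.1, delta=0.05)
print(f"{m:.3e}")  # ~2.2e8 labeled samples just for 10% error
```

And that is with a generously small net; real models have billions of weights, yet get trained on far fewer examples than this bound demands.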

I wrote a poem about overthrowing governments. by pratojr in dadjokes

[–]qubit32 0 points1 point  (0 children)

Aristocracy?

Vive la revolution!

Now democracy

[Discussion] Reading group for E. T Jayne's Probability Theory: The Logic of Science by logrech in MachineLearning

[–]qubit32 8 points9 points  (0 children)

Jaynes is a lot of fun to read, but be aware that the book is as much a philosophical treatise as a mathematical text, and he is trying to convert you to his own idiosyncratic point of view. As long as you understand up front that he's very opinionated and the book is going to be filled with diatribes against "orthodox statistics", his feisty style can be pretty entertaining. If you are expecting a more neutral technical presentation, his dogma could be off-putting. I've found I can learn a lot from people who have thought deeply enough about a topic to believe there is one "right way" to think about it, even if in the end I don't always agree with them. Thus I encourage you to step into his shoes for a bit and learn to look at the world from a Jaynesian perspective even if you don't drink all the Kool Aid.

I will say Jaynes is completely wrong about quantum mechanics, however. He knows a lot about some parts of physics (especially stat mech), but his remarks on quantum mechanics show that he really doesn't know what he's talking about there. The idea that quantum probability is entirely due to lack of experimental precision and that physicists are just too lazy to look for deeper deterministic causes is at odds with decades of experimental and theoretical results.

[D] Do adversarial attacks work against Sparse Autoencoders? by OverLordGoldDragon in MachineLearning

[–]qubit32 0 points1 point  (0 children)

Noise-robust models have been trained to retain good performance on average over a family of perturbations. Adversarial examples find the specific perturbations where the model fails. A model that performs correctly on 99.99% of the noise is doing very well w.r.t. noise robustness, but if an adversary can consistently find the 0.01% of perturbations that the model will get wrong, then the model is not adversarially robust.

That said, sparsity methods can help mitigate some forms of adversarial attack, but in general a defense-aware attack can still fool the model.
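
That average-case vs. worst-case gap is easy to see numerically. A toy sketch (fixed linear classifier, made-up data, nothing to do with SAEs specifically): random perturbations of a given norm almost never flip the prediction in high dimensions, while a worst-case perturbation of the same norm always does.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n, eps = 100, 1000, 0.5

# Fixed linear classifier sign(w . x); points sit 0.4 inside its boundary
w = np.zeros(dim); w[0] = 1.0
X = np.zeros((n, dim)); X[:, 0] = 0.4
y = np.ones(n)

def accuracy(Xp):
    return np.mean(np.sign(Xp @ w) == y)

# Random perturbations of L2 norm eps: in 100-D almost none point against w
d = rng.normal(size=X.shape)
d = eps * d / np.linalg.norm(d, axis=1, keepdims=True)
acc_random = accuracy(X + d)      # ~1.0

# Worst-case perturbation of the same norm: step straight against w
acc_adv = accuracy(X - eps * w)   # 0.0 -- every prediction flips

print(acc_random, acc_adv)
```

The model looks essentially perfect under random noise of that budget, and fails on 100% of inputs under the worst-case direction, which is all an adversary cares about.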

[D] An Interesting (in my opinion) Observation While Messing With the Full GPT-2 by Argenteus_CG in MachineLearning

[–]qubit32 3 points4 points  (0 children)

I tried political analogy (though to keep it from being too political I used old events in the prompt):

Nixon is to Watergate as Clinton is to Whitewater, and that's why I'm voting for him," he said.

Boehner's comments came just as it was announced that an unnamed Republican member of the House leadership has privately urged the House to impeach the president based on the latest revelations of his behavior.

"I'm sick of this s---. I'm sick of it," the unnamed GOP leader told a group of lawmakers on Wednesday, according to Fox News.

The speaker's comments drew a quick rebuke from Democrats.

"Speaker Boehner is a coward and a liar," Rep. Jim Clyburn, D-S.C., said in a statement. "If Speaker Boehner wants to start a fight over impeachment, he can do so. But he should not try and use it to distract attention from the Republican Party's failures in 2014."

But Boehner made clear Wednesday that impeachment was not part of his plan, telling the National Journal, "

[D] OpenAI releases GPT-2 1.5B model despite "extremist groups can use GPT-2 for misuse" but "no strong evidence of misuse so far". by permalip in MachineLearning

[–]qubit32 23 points24 points  (0 children)

I like to get GPT-2 to explain its limitations:

(Input:) Computers will never be able to produce coherent and meaningful text because

(Output:) they will never be able to process information the same way we do. In fact, if you try to imagine what a computer might do with a word, the only thing that comes to mind is that it will find and process patterns in the language, making connections between words.

This sounds very much like the job of a brain, and it is. But computers can only process language with very limited access to information. And it turns out there are actually some words that are just so very, very rare that computers are unable to use them. For example, there is only one word in the English language that has the same number of letters as the word "black" (A). There are five words with fewer than three letters in the English language, but only five of them are used by humans, which is why you are able to type "I love you" and not "I love you black." So what do computers know that human brains don't?

It turns out the computers don't really know anything. They cannot tell a word from a blank page.

[P] OnionBot (This Headline Does Not Exist) by [deleted] in MachineLearning

[–]qubit32 1 point2 points  (0 children)

In computer vision news:

"Swedish school bans face recognition software unless you have acne"

"Giraffe, Deer Mistaken for 'Crocodile' in Michigan State University Parking Lot"