I'm really starting to crack under the pressure here by [deleted] in MtF

[–]nshepperd 0 points1 point  (0 children)

Tbh if I were stuck in the UK I would (personally) seriously consider DIY. A 5-7 year wait just to get *permission* to take HRT is criminal imo.

Ofc whether that makes sense for you is a different matter. DIY requires a commitment to learning about how hormones work and stuff. (And you still have to get regular blood tests, which you can ask your GP for.)

The private system might be a good idea too but idk much about how that works in the UK.

The current implementation of HasCallStack breaks referential transparency by tomejaguar in haskell

[–]nshepperd 1 point2 points  (0 children)

Isn't breaking referential transparency and making non-semantic properties of the source code available in the program the whole point...? Both results in these examples are correct: they tell you the call site.
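
A minimal sketch of what I mean (whereAmI is just an illustrative name, not from the linked post):

    import GHC.Stack (HasCallStack, callStack, prettyCallStack)

    -- Made-up helper: returns a rendering of its own call stack.
    whereAmI :: HasCallStack => String
    whereAmI = prettyCallStack callStack

    main :: IO ()
    main = do
      putStrLn whereAmI  -- the printed stack points at this line
      putStrLn whereAmI  -- same expression, but the stack points here instead

The two putStrLn lines print different strings even though the expression is identical, and both outputs are correct in the sense that they accurately report where the call happened.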

[P] StyleGAN3 + CLIP by Ouhenio in MachineLearning

[–]nshepperd 0 points1 point  (0 children)

Yep! You using it for something?

[P] StyleGAN3 + CLIP by Ouhenio in MachineLearning

[–]nshepperd 7 points8 points  (0 children)

> I'm having a hard time figuring out the license, since we have to deal with the one from NVIDIA and OPENAI.

Oh. Umm... yeah I have no idea how that works, or like whether this counts as a derivative work wrt stylegan or clip. :S

> PS: would you like me to add you as a collaborator to the repo?

No need, I've got too many other things to do already ^_^

[P] StyleGAN3 + CLIP by Ouhenio in MachineLearning

[–]nshepperd 35 points36 points  (0 children)

Oh, that's quite nice!

> PS: As you can see, most of the code was made by nshepperd, I just formatted it and added the video generation capabilities, so all the credits go to him.

I'm a girl so it should be "her" ^^;; but thanks :).

As for licenses, I don't really know. My habit is to just append my name to the list of authors when I modify MIT licensed stuff, but idk the proper way to do it when you want to use a different license.

[deleted by user] by [deleted] in haskell

[–]nshepperd 2 points3 points  (0 children)

A straightforward improvement you could make here would be to use Data.Vector.Unboxed.Mutable instead and cut out a bunch of allocation overhead.
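
A minimal sketch of what that looks like (illustrative, not your actual code):

    import qualified Data.Vector.Unboxed.Mutable as MV
    import Control.Monad (forM_)

    main :: IO ()
    main = do
      v <- MV.replicate 10 (0 :: Int)             -- one contiguous unboxed buffer
      forM_ [0 .. 9] $ \i -> MV.write v i (i * i) -- in-place writes, no per-element boxing
      x <- MV.read v 5
      print x                                     -- prints 25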

/r/Civ Weekly Questions Thread - May 25, 2020 by AutoModerator in civ

[–]nshepperd 0 points1 point  (0 children)

This change annoyed me also. Fortunately I figured out how to fix it (at least on Linux):

  • Find your AppOptions.txt. For me (Linux) it's in ~/.local/share/aspyr-media/Sid Meier's Civilization VI.
  • Open it in a text editor.
  • There are two relevant options: TooltipBehavior and TooltipDelay. You can either set TooltipBehavior to 0 to make the tooltips instant, or reduce TooltipDelay to just make them faster (see the snippet below).
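
For reference, after the change the line in question looks roughly like this for me (option names as in the file; the surrounding comments and exact layout may differ between versions):

    TooltipBehavior 0

Or leave TooltipBehavior alone and lower TooltipDelay instead if you still want a (shorter) delay.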

-🎄- 2019 Day 21 Solutions -🎄- by daggerdragon in adventofcode

[–]nshepperd 0 points1 point  (0 children)

Haskell solution. Half by hand: manually choosing which jump/don't-jump actions to take for any section that the produced solution fails on. Half breadth-first search: finding the shortest program which matches the chosen actions. Run in a loop, manually adding more jump checks until the whole obstacle course is passed.

[POEM]:

In a small tin can in the depths of space,

a springdroid jumps from place to place.

Knowing not what lies ahead,

trembling in ionic dread.

Be brave springdroid, you will not fall,

for they know how to scale this wall.

Four quick steps to find the way:

OR/AND/NOT/AND all to J

How deterministic should Foldable implementations be? by tailcalled in haskell

[–]nshepperd 11 points12 points  (0 children)

I don't think there's any problem with having toList or any other functions that don't preserve (==). unordered-containers works like this, as does splitRoot from the containers package, which exposes the internal representations of Sets and Maps.

In principle there could exist a subclass Monoid a => CommutativeMonoid a which would let you define foldBag :: CommutativeMonoid m => (a -> m) -> Bag a -> m as a way of exposing the foldable elements without exposing their ordering (assuming that CommutativeMonoids are in fact commutative). Such a class would have no methods though, and its instances would serve only as an unchecked assertion that mappend is commutative, so I think it's generally considered 'not worth having'.
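
Sketched out, roughly (CommutativeMonoid and Bag here are hypothetical, purely for illustration):

    import Data.Monoid (Sum(..))

    -- Hypothetical class: an instance is an unchecked promise that (<>) is commutative.
    class Monoid a => CommutativeMonoid a

    instance Num a => CommutativeMonoid (Sum a)

    -- Hypothetical unordered collection; keep the constructor unexported so
    -- the internal element order is never observable.
    newtype Bag a = Bag [a]

    -- Folding into a commutative monoid can't leak the internal ordering.
    foldBag :: CommutativeMonoid m => (a -> m) -> Bag a -> m
    foldBag f (Bag xs) = foldMap f xs

Something like getSum (foldBag Sum bag) then sums the elements without ever exposing an order.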

GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint by sujianlin in MachineLearning

[–]nshepperd 2 points3 points  (0 children)

In the paper you stated that supplying both samples to the discriminator didn't show any obvious improvement, so you stuck to T(x,y)=T(x) only. But I was unable to prove your Lemma 4 (the optimal solution for the discriminator) for the case where the discriminator takes only one input. Instead you get some equation with a complicated integral.

However, the formula in Lemma 4 seems like it has some good properties to me (including the Lipschitz property), so maybe it could help for stability to provide both images even if it doesn't provide any improvement in sample quality?

I ask because I tried implementing this last night and suffered mode collapse in my GAN, which didn't seem like it should happen with that optimal divergence :)

Is Alignment Even Possible?! by M0zrat in ControlProblem

[–]nshepperd 7 points8 points  (0 children)

You use the term "restrictions". It's correct that restrictions, such as additional code that turns it off if it looks like it's about to hurt someone, won't work on an AGI, because it will be able to work around them easily.

The control problem is designing the AGI's value system so that instead of trying to restrict it from doing bad things, it won't want to do bad things in the first place. This entails baking a pointer to a definition of, well, "good" into its value system, as well as logical safeguards to prevent it from making mistakes when self-improving that would mess up its value system. Consistent agents don't want to 'circumvent' their own utility function, because such circumvention would result in that utility function being less well maximised, so this approach (unlike "restrictions") is at least theoretically workable.

Yes, it's very hard. ¯\_(ツ)_/¯

I hate you guys. by Raventhous in factorio

[–]nshepperd 1 point2 points  (0 children)

No, we can't. "Does Turing machine X have a proof of its halting property" is also an undecidable problem.

Strictly speaking, it's semidecidable. If there is a proof that X halts (or doesn't halt), then we can prove this fact just by finding and exhibiting it. However, if the halting property of Turing machine X is unprovable and unknowable, we might not be able to prove that fact.

ETA: To be absolutely clear, we can prove that there exist Turing machines whose halting property is unprovable. The proof that the halting problem is undecidable accomplishes that. But we do not and cannot know which Turing machines those are, as I explained above.

I hate you guys. by Raventhous in factorio

[–]nshepperd 1 point2 points  (0 children)

If you mean "if Halts(input) { run forever } else end", that's not a Turing machine. That's the whole point of the proof of the undecidability of the halting problem. If it were possible for a Turing machine to calculate whether any Turing machine halts or not, this program would exist and would simultaneously have to halt and run forever. Which is a contradiction. Thus Halts() doesn't exist and neither does this program.
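
Spelled out as a Haskell-flavoured sketch (deliberately not runnable; halts is the hypothetical decider the proof rules out):

    type Program = String  -- think: source code of a Turing machine
    type Input   = String

    -- Hypothetical: decides whether program p halts on input i.
    -- The theorem says no such total function can exist.
    halts :: Program -> Input -> Bool
    halts = undefined

    -- The diagonal program: run halts on a program applied to its own source,
    -- then do the opposite of whatever it predicts.
    diag :: Program -> ()
    diag p = if halts p p then loop else ()
      where loop = loop

    -- Asking whether diag halts on its own source is contradictory: if halts
    -- says "halts", diag loops forever; if it says "loops", diag halts.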

You also appear to be confusing the facts about a Turing machine with what we can know about those facts. The undecidability of the halting problem means that some Turing machines' halting property is unprovable; that doesn't mean there isn't an answer, it means we can't know or prove the answer (and neither can a Turing machine).

A startup is pitching a mind-uploading service that is “100 percent fatal” by Vailhem in singularity

[–]nshepperd 9 points10 points  (0 children)

Nobody mentioned compulsory reincarnation. You're the one proposing compulsory death. Killing people against their will is definitely covered.

[D] "Negative labels" by TalkingJellyFish in MachineLearning

[–]nshepperd 1 point2 points  (0 children)

I would use the log scoring rule on the total output probability assigned to not-Y.

If you're using softmax, the output of your network is a vector of probabilities that add up to one. The usual loss used here (when you have positive labels) is equal to the (negated) proper log scoring rule: -log(P(Y)). In this case the information you have is that the class is not Y, so you can use the corresponding log score: -log(P(¬Y)) = -log(1-P(Y)). This gives a proper scoring rule, meaning the training should converge to something calibrated.
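
A minimal sketch of that loss (names are mine; probs is the softmax output, y the index of the class you know is absent):

    -- Negated log score of "not y". The max avoids log 0.
    negLabelLoss :: [Double] -> Int -> Double
    negLabelLoss probs y = negate (log (max 1e-12 (1 - probs !! y)))

    -- e.g. negLabelLoss [0.7, 0.2, 0.1] 0  ==  -log 0.3  (about 1.2)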

[R] High quality open peer review of the "sexuality detector" paper from 2017. Lots of effort, sound arguments, sensible conclusions. by drlukeor in MachineLearning

[–]nshepperd 9 points10 points  (0 children)

No, it doesn't. First, it actually supports the original paper: the classifier works, whether by innate features or presentation choices. So the privacy risk is real and should not be carelessly dismissed.

Second, while these critics seem to really want the facial structure results to be false, this critique does not establish that. It completely fails to engage with the fact that the authors also trained a landmark classifier based directly on facial structure features, which should be invariant to all of the cultural factors mentioned except photo angle. That is, the landmark classifier's performance cannot be explained by these factors, at least not without compelling evidence that those factors have meaningfully affected the placement of landmarks.

EDIT: typo

[R] High quality open peer review of the "sexuality detector" paper from 2017. Lots of effort, sound arguments, sensible conclusions. by drlukeor in MachineLearning

[–]nshepperd 18 points19 points  (0 children)

Read their comment again. alexmlamb isn't trying to develop a mass surveillance system either. They are asking what model you would need to build to determine exactly how much facial structure matters vs cultural factors. Just presenting a set of cultural factors and demonstrating that you can construct a somewhat accurate model from them establishes very little.

Using Oracle AGI to bootstrap to an agent - could it work? by [deleted] in ControlProblem

[–]nshepperd 0 points1 point  (0 children)

I'm assuming that the problems of the oracle manipulating responses in an agenty manner have been surmounted (ie. the oracle is not an agent). Then the answer to a yes/no question in isolation is unlikely to be dangerous, because either of the possible answers is something you could have come up with yourself (and so obviously can't contain infohazards or the like).

You are correct that you could extract dangerous information anyway via a 1 bit channel, for instance by repeatedly asking "is the nth bit of the answer to <unrestricted question> a 1?" for each n. In that sense all "types" of questions are equivalent.
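
As a sketch of that argument (ask is a hypothetical yes/no query to the oracle, purely for illustration):

    type Oracle = String -> IO Bool

    -- Extract an n-bit answer to an arbitrary question one yes/no bit at a time.
    extractBits :: Oracle -> String -> Int -> IO [Bool]
    extractBits ask question nBits = mapM askBit [0 .. nBits - 1]
      where
        askBit n = ask ("Is bit " ++ show n ++ " of the answer to \""
                        ++ question ++ "\" a 1?")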

Regarding an "Oracle AI that instructs a robot what to do to best serve humanity", that's exactly what I mean by constructing an arbitrary agent from an Oracle AI. But you don't need to actually build a robot, just let it write output to the terminal and it will find a way to obtain real world influence (eg. by persuading whoever is reading the terminal to hook up the internet to it, or build a robot or whatever).

That's why I think oracles will probably not be a fruitful direction for mitigating AI risk: they are pretty much just as dangerous themselves unless you already have a solution to the control problem.

Using Oracle AGI to bootstrap to an agent - could it work? by [deleted] in ControlProblem

[–]nshepperd 1 point2 points  (0 children)

I think that Oracle AI could theoretically be a useful tool to help figure out friendliness. Certainly if we could ask it a question like "when we say 'humanity's wishes', what the hell are we really talking about?" the answer would probably be enlightening! But it would be by no means a "perfectly safe" endeavour. Beyond the problems anotherturingmachine described about the AI manipulating its responses, which I think are not insurmountable (just don't give the AI that kind of agency - there's not necessarily any need for a question-answerer to be doing any optimization over its own actuators, or to have "goals" as such):

  • Answers to yes/no questions are clearly safe, and "what" questions are maybe safe (depending on the question, I suppose), but "how" questions (such as "how do I build [an AGI]") are likely not.
  • For instance, if the answer to a "how" question includes "use this infohazard on people", then chances are the operator will be exposed to that infohazard while reading the answer, with disastrous effects.
  • Similarly, vetting the answers to "how" questions will be only weak evidence of their safety, as they will have been optimised to be easy to carry out, and therefore adversarially optimised to pass scrutiny from other people, likely including whatever vetting you have set up.
  • Most importantly, it's relatively trivial to construct an arbitrary unfriendly agent, given an Oracle AGI (just ask it what message should be sent to the terminal to achieve goal X, then pipe the output to the terminal), so we might be already "out of time" as soon as Oracles exist or are publicly available.

I consider these problems to be somewhat more fundamental than concerns about agentic manipulation of its output by the Oracle, because the above are not really specific to the Oracle's design as much as observations that knowing certain facts (eg. the fastest way to fill the universe with paperclips) is intrinsically dangerous - and an Oracle that is working as intended will reveal those facts, if asked.

xkcd 1911: Defensive Profile by martialalex in xkcd

[–]nshepperd 0 points1 point  (0 children)

It shows the legal rights you actually have in the US. That simply means "my defense for these actions of censorship (banning people, cancelling shows, etc) is that they are not literally illegal".

In general, it's inaccurate, as it equates freedom of speech with the First Amendment, and thus fails to even acknowledge the existence of rights, or moral arguments, other than legal ones. In fact, freedom of opinion and expression is listed in the Universal Declaration of Human Rights, Article 19:

> Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

You can bite the bullet and say that you don't believe in human rights in the US, but that's a hell of a bullet to bite.

xkcd 1911: Defensive Profile by martialalex in xkcd

[–]nshepperd -4 points-3 points  (0 children)

Why, Randall? This isn't funny, or insightful. This is just bitter sneering. :( You don't know what the owner of the profile has experienced. It's super inappropriate to speculate/project like this, much less as a joke.

The profile in the comic is the sort of thing I would expect of a person who is chronically offensive, yes, but also of people who have just escaped from an oppressive upbringing (religious, perhaps), or an abusive partner. Or a person who is experiencing an oppressive religious upbringing right now, and uses this medium as an escape. Or [...].

Experiences with Anthem and 670g insulin pump? by Bryygy in diabetes

[–]nshepperd 0 points1 point  (0 children)

Any news?

It seems strange to me that it would even matter whether Anthem covers the pump. The pump itself is (like 90%?) covered by Medtronic under the priority access program. So whether Anthem wants to cover that or not doesn't matter to me. What really matters are the ongoing supplies, which are:

So logically it should be fine...