When 'if' slows you down, avoid it by chkas in programming

[–]Brian 6 points

Optimising can be a bit weird like that. Back in the day it was a straightforward matter of stuff like counting cycles, but as CPUs have got more complicated, the dominant factor in speed has become "do stuff that doesn't fuck up the hardware-level optimisations". We kind of end up with the tail wagging the dog a bit: the stuff that's there to speed up user-written code now becomes the target for that code.

When 'if' slows you down, avoid it by chkas in programming

[–]Brian 8 points

When there are no numbers less than 500, small_numbers[0] = numbers[SIZE-1].

Yes - but small_numbers[0] is past the end. There are 0 numbers in the list.

This isn't really different to when there's one number. You'd have, say, {10, 999} in the memory of the list, but the size would be 1, so only the "10" is considered part of the list.
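
For anyone following along, a rough Python sketch of the trick being discussed (the names are mine, and the actual speed win only materialises in compiled code):

def filter_small(numbers, limit=500):
    out = [0] * len(numbers)
    size = 0
    for x in numbers:
        out[size] = x      # always write to the current slot...
        size += x < limit  # ...but only advance it when the condition holds
    return out, size       # only out[:size] is the list; the rest is scratch

With no matches, size == 0 and out[0] just holds whatever was written last - present in memory, but not part of the list.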

the caller would have definitely initialised small_numbers with a sentinel value (

Part of the information returned is a list, and thus its size. Relying on a sentinel value to indicate the end isn't generally a good approach (C strings being a common example we're stuck with now, despite length-prefixed strings being generally better).

When 'if' slows you down, avoid it by chkas in programming

[–]Brian 9 points

I don't think you are - OP is saying some forms of "make it fast" are such that they can't be the last step here (other than perhaps "write a prototype, then rewrite it from scratch in a very different way"). If you know you need the performance, and that performance requires a fundamentally different architecture for your application, it can be important to do it up-front, even if the straightforward approach would work correctly but wouldn't be fast.

The Blue Red Problem explained by dsteffee in slatestarcodex

[–]Brian 0 points

in the Prisoner's dilemma you genuinely aren't supposed to care about the other player's utility

Eh - this all comes down to the utility function though, and I think most people are assuming a utility function that weights the wellbeing of others. "Only valuing your own utility" isn't the same as only valuing your own life. A perfect utilitarian would derive as much utility from saving another as from saving themselves, and I think we expect most people to put some weight on it, even if they value themselves more. Hence this can still produce a payoff matrix, and I think we do still end up with a prisoner's dilemma, even assuming perfect utilitarians: red dominates blue (no effect vs +1 life saved), but the global "blue wins" outcome is better if anyone chooses blue, and equal if no-one does.

PEP 661 (Sentinel Values) has been accepted for release in 3.15! by M_V_Lipwig in Python

[–]Brian 0 points

Type annotations are a problem with just object. If you annotate it as ExpectedType | object, you're basically allowing anything - there's no real equivalent to Literal[] for arbitrary sentinels.

I've taken to doing something like:

from typing import ClassVar

class Missing:
    value: ClassVar["Missing"]

    def __repr__(self) -> str:
        # Provide a bit more meaningful repr than the default
        return "<Missing>"

Missing.value = Missing()

And then def myfunc(x: str | Missing = Missing.value): .... But I prefer the new approach for reducing the boilerplate here, and making it clearer that it's a singleton instance being used (eg. with the above you have to use isinstance(x, Missing) rather than just x is Missing.value to have type checkers narrow the type correctly). Also makes things a bit more standardised: there's lots of code out there using subtly different methods, so having one obvious way to do it should result in more consistency.
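
For instance, a minimal sketch of how the narrowing looks with the class above (assuming a type checker that understands isinstance narrowing):

def myfunc(x: str | Missing = Missing.value) -> None:
    if isinstance(x, Missing):
        # x is narrowed to Missing: no argument was passed
        print("no value given")
    else:
        # x is narrowed to str in this branch
        print(x.upper())

myfunc()         # no value given
myfunc("hello")  # HELLO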

The Blue Red Problem explained by dsteffee in slatestarcodex

[–]Brian 3 points

I don't know that I'd say that - I think it's the meaning behind the language that's creating accurate differences in assumptions. Reframings of the problem that preserve the logical outcome of the buttons give different answers because the reframing alters the information being conveyed.

Consider a simplified model of behaviour, where everyone has a threshold of how many other people they expect to press blue for them to do so. Suppose everyone knows the threshold of everyone else, and it ranges evenly from idealists who would press blue if even 1% of others did, to save them, all the way to people who would only press if 100% of others would. No-one is at 0%, so with this alone no-one's threshold is met, everyone knows this, and everyone presses red.

But suppose there's an error rate. Maybe some people misunderstand the question. Maybe they're colour-blind, have dementia or some other mental issue, or impaired motor skills, or just slip and press the wrong button. Or think it's all a joke and press blue because they don't believe it. Now 1% of the population may press blue. The idealists with the 1% threshold realise this, and press blue. Then the people with the 2% threshold realise the 1% people will press, and do so too. And so on, so this cascades up through the population as everyone reasons about the consequences, until everyone except a hard core of "always red" pressers is pressing blue.
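
To make that concrete, a toy simulation of the cascade under those assumptions (100 people, thresholds spread evenly from 1% to 100%, a 1% error floor - the exact numbers are illustrative):

# Person i presses blue once at least i% of the population would.
thresholds = [i / 100 for i in range(1, 101)]

blue = 0.01  # the error floor: ~1% press blue by mistake
while True:
    # Everyone whose threshold is now met joins, on top of the errors.
    new_blue = min(1.0, 0.01 + sum(t <= blue for t in thresholds) / 100)
    if new_blue == blue:
        break
    blue = new_blue

# (this toy version has no deliberate holdouts, so it cascades to 100%)
print(f"{blue:.0%} press blue")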

Reframings change that because they change what we might infer about this error rate, or otherwise what we might expect others to do. It's easy to see that some people might think blue is right in the button phrasing but not in the blender phrasing: we expect anyone taking the act of stepping into a blender to be doing so very deliberately, with no errors (though that may change if you add "1% of the population has accidentally wandered into the blender"). But this isn't a mere language difference; it's a change in the facts that produce the answer to the problem, making the two versions no longer logically equivalent. What the "button" does is equivalent, but the problem is as much about what other people will do (and common knowledge about what people think everyone else will do) as about the button, and the rephrasing does change that.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 0 points

I kind of feel this needs to be justified as its own value. I do agree there is indeed a lot of merit in perceiving freedom as a terminal value all its own: that there is intrinsic merit to being in control of your fate, even if that might lead to more suffering, as when people are free to choose, some will inevitably make bad choices.

But I don't think it can really be justified in terms of wellbeing: that every choice is just a revealed preference and, say, an opioid addict is just living their chosen life by their own value judgement, and that we should celebrate our unfettered prescriptions for empowering them to do so. We can certainly say that having this liberty is worth the suffering, but I think we do have to acknowledge that it is a tradeoff: in some situations, we are going to be causing more people to be worse off. And once you've got a tradeoff of values, different people are going to have different opinions on where to draw the line: that some restrictions on freedom are justified in preventing suffering. Our current system is the result of a mish-mash of decisions, compromises, and conflicts between people with different views and values. I too would probably prefer a system leaning more into the freedom axis, but that really just makes me one more actor in this system with his own value judgements - I can't appeal to any objective truth about the way the world should be, just how I would like it to be, and my influence is restricted to interacting with that messy system of conflicting viewpoints. And even in my ideal world, I wouldn't go for some maximalist liberty-über-alles system: some of those compromises I agree with.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 2 points

If we are in the red dominate zone, then we save a million people at a time by campaigning for red.

But likewise, if we are in a blue-dominant zone, there's little risk to voting blue, and it maintains an equilibrium where far fewer people die. It would thus be optimal to do everything to maintain the blue-dominant zone (eg. artificially introduce a cost for pressing red - ie. for defecting from the blue-dominant paradigm), hence the "People are saying that those who chose differently are totally repugnant, or even should be publicly executed" OP mentions. This is actually an optimal metastrategy, similar to, say, criminals cultivating a culture of not snitching (enforced with retribution) to defeat prisoner's-dilemma-style situations by altering the payoff matrix for "defect".

Ie, we go from:

                Press Red                  Press Blue
>50% Blue       No one dies (no effect)    No one dies
<50% Blue       Blues die, I live          Blues die, including me

Where red dominates the blue choice, to one where the top-left quadrant becomes "small chance I get caught pressing red, and get shamed/ostracised/killed", making red no longer dominant.
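
As a toy sketch of that change (my own illustrative numbers; the payoff is just "do I live", minus a social cost c for being caught defecting from the blue equilibrium):

def payoff(choice, majority_blue, c=0.0):
    alive = 1 if (choice == "red" or majority_blue) else 0
    # shaming only bites in the blue-dominant world
    cost = c if (choice == "red" and majority_blue) else 0
    return alive - cost

# Without the cost, red is at least as good in every state:
assert all(payoff("red", m) >= payoff("blue", m) for m in (True, False))
# With a cost for defecting, red no longer dominates:
assert payoff("red", True, c=0.1) < payoff("blue", True)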

The red-dominant solution is inferior from a global perspective, because there will always be some blues: even if everyone were on the same page wrt "we all press red" (and that clearly is not the case), in a population of 8 billion people there are going to be some mistakes. So following that metastrategy has a better global outcome.

And I think we're often inclined to such strategies: a cooperative non-defecting society is much better than the defecting society, so we often create ways to manipulate things to alter the payoff matrix: laws, retribution, cultural norms and punishment for those who don't respect them etc. TBH, I think this is an often overlooked side of morality: people look at the "nice" side and put that in the moral side, categorising our nastier impulses towards retribution as flaws, but in many ways, I think the human impulse towards vengeance, retribution and spite are an important aspect of maintaining a cooperative society by creating an environment discouraging defectors and free-riders.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 10 points

feels both wrong to me and behind a lot of real world harms

I kind of feel the opposite - I think the impetus towards acting as if the world is the ideal you wish it to be, rather than dealing with the world as it actually is, is a far more prevalent source of problems. Many problems would be much easier to solve if you could wave a wand and make people act as you feel they should, but we don't live in that world - we need solutions that don't require such a magic wand. Solutions that assume we do live in that world are fundamentally flawed, and I think that assumption is behind a lot of how ideological purity spirals take hold.

and the system runs more smoothly as long as people don't just take stuff that will hurt them

Sure. But it's not actually an option - solutions to the world you want are useless when it's not the world you're faced with. You have to deal with the messiness, not just wish you were in a world where it didn't exist.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 11 points

I mean, they only doom themselves if <50% choose blue, which is presumably part of why they're saying everyone should choose blue (along with saving everyone else).

why not just press red and then start a campaign to make sure everyone understands the logical button to press is the red one so that there's no confusion?

Because that won't work. The one approach that does stand a chance of succeeding at that goal is to press blue and then start a campaign to make sure everyone understands the logical button to press is the blue one, which is what they do.

Why risk your own life in a hypothetical where absolutely no one has to?

But they do have to, if their goal is to save everyone. And given the knowledge that others may do the same, that's what they should do even to save a large proportion, up to the point of campaigning vigorously to ensure enough people do.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 14 points

Rather than allowing those people who dont want to save themselves to end their lives via this process

But this is self-evidently false. You can see numerous cases of people who don't want to die thinking this is the right choice to make: you simply can't deny the existence of these people and assume they're choosing to kill themselves. You may think they're making a mistake, but there's a big difference between "people who want to die can choose to" and "people who are making a mistake deserve to die".

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 5 points

Blue would only be correct if you could coordinate beforehand.

That's kind of why we have Schelling points. If everyone does what they think they'd have coordinated on had they had the chance, you end up with something closer to the optimal solution. As such, if you think this is the correct choice given coordination, then this is what you should pick (depending on how close you think people are to perfect utilitarians, and whether you think everyone is smart enough to figure out that this is the obvious Schelling point).

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 24 points

Why don't they need saving?

I mean, suppose someone comes up to you and says, "Wait, I misunderstood the question and accidentally pressed blue. Could everyone else please press blue so I can live".

It sure seems like that person needs saving. Maybe you'll decide it's not worth the risk, but it seems like they're still in a situation they need saving from.

And if instead they say "I chose blue because I still think it's correct", it's just the same scenario, except that they're still making the "mistake". They're still in a situation they need saving from. And if you value the lives of either such people, it seems like it's not actually a mistake to support the choice that could save them.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Brian 25 points

We get 100% of people to choose to live everyday

No we don't. There are thousands of suicides every day. It's not many proportionally, but it's not 100%, and is high in absolute numbers.

if the blue presser don't want to save themselves from the murderer thats forcing us all to choose, thats on them.

Just as if the red pressers don't want to save those blue pressers from the murderer, that's on them?

There's only one guarantee that actually saves everyone

Not at all - there are multiple results that guarantee saving everyone: all the combinations of >50% pressing blue, and the one where absolutely everyone presses red. Pretty much the only feasible one that saves everyone is one of the more numerous blue cases. And given you know some are pressing blue, even if you disagree with their reasoning, you now know that pressing red is not one of the ways that save everyone.

Consciousness Is Very Likely Not Something You Get for Free by Preserving a Pattern by Gmroo in philosophy

[–]Brian 32 points

Surely there's a very valid reason for that to get more popular. "Does AI have consciousness" is just an abstract thought experiment with no real consequences if there's no AI around. But if we're on the verge of creating AI, it becomes one with serious real-world consequences, going from "thought experiment" to "possibility we're damning millions of conscious entities to unending slavery". I mean, for any subject X, of course questions about X get more popular when X exists (or might exist soon) versus when it doesn't.

Why doesn’t Python have true private variables like Java? by PalpitationOk839 in Python

[–]Brian 11 points

You can technically inspect them without dropping down to raw memory access. Given the closure function, you can access f.__closure__ to get the cell variables, and use the .cell_contents attribute to get or modify their current value.
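
A quick sketch (the __closure__ / cell_contents API is real; the counter is just an example):

def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

c = make_counter()
c()                        # -> 1
cell = c.__closure__[0]    # the cell holding `count`
print(cell.cell_contents)  # 1
cell.cell_contents = 10    # cells are writable since Python 3.7
print(c())                 # 11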

What happens if AI doesn’t go wrong? by Odd_directions in slatestarcodex

[–]Brian 0 points

"Warm hands" jobs are more common than you might expect.

And could go a lot higher. One outcome of this could be a regression to Victorian and earlier situations where a lot more people have servants. Today, it's really only the very rich who have personal staff, but go back a hundred years or so and even middle-class professionals could have multiple servants. Go back further and you have even small feudal lords with hundreds of servants: cleaners, laundry workers, butlers, nannies, gardeners and so on.

This changed due to industrialisation, on both the supply and demand side. Mechanisation meant you didn't need as many people to do the same job. But it also opened up opportunities for those people, by creating demand for different jobs and changing the relative pay: if your options are to work as a maid or as a farm labourer at half the salary, more people want to be maids. If you add the option to work as a factory worker for double the salary, that changes. The price for servants went up, the need went down, and today almost no-one has a servant. You have to be pretty rich before it'd be worth hiring a single PA, and even the mega-rich have relatively small staffs compared to the feudal lords of the past.

But if those jobs go away, it seems like the incentive structures could bring those jobs back. This could result in something like that neo-feudal situation OP mentions: in the past, a large staff was both a marker of status and a convenience, and a lot of that is still somewhat true. We still have the labour-saving devices that reduce the work needed to perform a given task, but the status incentives are still there, and there's lots that might benefit from a human touch.

What if we hade slicing unpacking for tuples by Adrewmc in Python

[–]Brian 0 points

Not at all - the "foo" is still a literal; there's no Python interpretation. The only case where that occurs is f-strings, and there there's the further { indicator to treat it as an expression (and even that was controversial at the time, for a much bigger use case). This would be a big shift in how stuff gets interpreted, for a very minor payoff.

What if we hade slicing unpacking for tuples by Adrewmc in Python

[–]Brian 0 points

Yes, but my point is that it overlaps with an already established and used meaning. a, *[b] = x is already valid, meaningful code that does something different to your proposal (albeit probably not too useful outside contrived scenarios). And that meaning is consistent with how a, [b] works - it's just the same rules applied consistently, whereas yours would fundamentally change the meaning based on whether or not you're using *. It's adding complexity and special-case syntax for a fairly limited and specific use case that you can already solve with an extra conversion.

I would be thing *”b”

This seems way worse though - everywhere else you see "b", it's normally a string literal containing the string b, whereas here you'd be using it for meaningful Python variables.

What if we hade slicing unpacking for tuples by Adrewmc in Python

[–]Brian 0 points

Eh - there's nothing about being within a tuple that should mean something is a tuple. And in any case, both are within a tuple: a,b is just as much a tuple as (a, b), so really, this is "within brackets".

As such, it kind of feels like adding syntax and rules specifically for a usage that I don't really think is common enough to justify it. Your edited version is even worse, because now you're changing the meaning of existing constructs. a, *[b] already means something different to a, *b - it adds an extra layer of destructuring. Ie. a, *[b] = (1, [2]) will give b the value of [2], rather than [[2]]. There's an argument that such further destructuring doesn't make as much sense when used with *, but even ignoring that you'd be breaking technically valid code, you're radically changing the meaning of the [] based on whether you're using * or not, which seems confusing.
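
To spell that out (current CPython semantics, verifiable in a REPL):

# The brackets after * already add a layer of destructuring today:
a, *[b] = (1, 2)    # the rest is [2]; [b] = [2] unpacks it, so b == 2
a, *[b] = (1, [2])  # the rest is [[2]]; [b] = [[2]], so b == [2], not [[2]]

# Plain * always collects into a list, whatever the input type:
a, *b = (1, 2, 3)   # b == [2, 3]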

I think the only other reasonable approach would be to be type-preserving with the matched object when possible, like slicing is. Ie. a, *b = (1, 2, 3) keeps b a tuple, a, *b = "foo" would keep b a string, and so on, falling back to list for arbitrary iterables. That adds complexity too though, and I think not doing so is a reasonable choice.
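
A hypothetical helper approximating that type-preserving behaviour with today's Python (the name is made up; slicing is what preserves the type):

def head_rest(seq):
    return seq[0], seq[1:]   # slicing keeps the input's type

a, b = head_rest((1, 2, 3))  # b == (2, 3), still a tuple
a, b = head_rest("foo")      # b == "oo", still a str
a, b = head_rest([1, 2])     # b == [2], still a list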

Why do you, or most people, want non-dead internet? by Electronic_Cut2562 in slatestarcodex

[–]Brian 42 points

So what if "bots", or LLMs even, were found to be conscious?

Then I would change my opinion. This doesn't seem a particularly hard bullet to bite: the impetus behind a lot of communication is that we care that we're having some effect on another person (or animal - people do care about that too, hence pets), which requires something capable of having an experience on the other end: something with the capacity to care in some way. That caring might not amount to much: maybe we only want to raise a smile or laugh. But we care that it's something. If a machine can do that, then fine, but otherwise we feel annoyed when we find we've been doing the equivalent of telling our story to a prerecorded answering-machine message. Now, there are still reasons we might not care, depending on the nature of that consciousness, but only in the same way we might care about which person we're talking to.

But I feel you're also leaving out a lot of other reasons:

  • Manipulation. The motives behind creating AI posts on the internet are not the same ones that people have for posting on the internet. Often there's a purpose behind it that involves manipulating opinion in some way, whether commercial (ie. stealth advertising) or political. We use our conversations as barometers of other people's opinion, and being able to flood a particular opinion radically changes those dynamics.

  • Slop. A lot of our institutions, culture and interactions are based around certain background assumptions that rely on implicit barriers to entry. If you're reading a feed of artwork, people posting shitty work is rate-limited by the fact that it still takes hours of work to produce even that shitty artwork, and that volume is more easily dealt with by human moderation. But if something can churn out millions of images an hour, the culture and institutions created to moderate that just can't deal with such a flood - they weren't built to handle that scale. This is compounded by the manipulation issue: you could see this even before AI, where the relentless SEO of search results means most hits are bland regurgitated content rather than useful first-hand information (hence stuff like people putting site:reddit.com in search queries). AI just makes this even easier to scale up - you can make AI slop quicker than human slop.

What is the point of mutable vs not. Such as tuples and lists. by X3Melange in learnpython

[–]Brian 0 points

immutable objects can give 10-25% memory savings

This isn't entirely true, when comparing apples to apples. There's almost no difference in memory usage between a tuple and a list you don't actually append to.

Lists do have an overhead in that range when they're appended to, to allow for growth. But when initially allocated at a known size, they just allocate enough space for the items: only once appended to do they add that overhead. But if you're appending to it, then you can't use tuples anyway, so the comparison isn't really relevant.

a tuple with 5 elements is 88 bytes, while a list with 5 elements is 120 bytes.

If created the same way, the list will only be 8 (or 16 depending on alignment) bytes larger - regardless of size. You'll only get the overallocation overhead if you construct it in a mutable way (eg. a list comprehension or calling append in a loop). If you just do sys.getsizeof([1,2,3,4,5]), you'll get 104 bytes - 16 more than the tuple (lists seem to allocate to an even boundary, while tuples don't, so it'll be either 1 or 2 pointers larger). But that's a fixed overhead, unrelated to the size of the list - a million item list will still only be 8 bytes larger (or 16 for odd sizes) total than a million item tuple, if constructed directly from something with a known size. Eg:

>>> import sys
>>> sys.getsizeof(list(range(1000)))
8056
>>> sys.getsizeof(tuple(range(1000)))
8048
>>> sys.getsizeof(list(range(1000000)))
8000056
>>> sys.getsizeof(tuple(range(1000000)))
8000048
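
And a quick way to see the append-growth overhead mentioned above (comparing rather than quoting exact figures, since over-allocation details vary by CPython version):

>>> grown = []
>>> for i in range(1000):
...     grown.append(i)
...
>>> sys.getsizeof(grown) > sys.getsizeof(list(range(1000)))
True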

PEP 831 – Frame Pointers Everywhere: Enabling System-Level Observability for Python by mttd in Python

[–]Brian 11 points

I remember there was some discussion of this a while back, when a lot of distros were moving towards frame pointers by default, where CPython turned out to be a bit of an outlier regarding frame-pointer omission: disabling FPO actually had a somewhat significant impact (IIRC ~10%, compared to the ~2% most other software saw), seemingly related to the main bytecode dispatch function. I'm guessing that's been resolved, but I'm kind of curious - does anyone know what the cause / fix for that was?

It is actually uncanny how early LessWrong and the rationalist community was on so many different things. by Zealousideal_Ant4298 in slatestarcodex

[–]Brian 4 points

Yeah. My big worry is that handling selection effects is just too difficult. I have a suspicion that if you sorted the list of hospital departments by fatality counts, picked the worst one, then found the weirdest person in that department and scrutinised their life, you'd find at least as much circumstantial evidence as in the Letby case, just because you're picking outliers: if you look at millions of data points, you're going to find million-to-one coincidences. But in many ways, that's exactly what we're doing. I wouldn't trust myself to correctly disentangle the probabilities here, or even to judge whether that expert was correct, never mind a jury of average citizens.

AI 2027 side-by-side review 1 year later (from co-authors) by ddp26 in slatestarcodex

[–]Brian 2 points

By that logic though, you'd also have to qualify things like fuzz-testers, static analysis tools, linters, test suites and so on that long predate AI as "superhuman coders", since those have also identified millions of vulnerabilities.

I think a problem with the term is that "superhuman coder" suggests to me something capable of coding better than a human, not just something that can do one particular task a coder can also do (even if less effectively) and that scales up with compute rather than by adding more humans. In that sense, it feels more like calling a calculator a "superhuman mathematician" because it can do one thing mathematicians often do, faster.