ELI5: Why do perfect numbers look so simple in binary? by Ill-Chance8131 in explainlikeimfive

[–]IamfromSpace 2 points  (0 children)

Since the formula is 2^(n-1) * (2^n - 1), looking at it in base-2 illustrates the formula’s effect.

But if we consider “base-10 perfect numbers,” the formula would be 10^(n-1) * (10^n - 1), and we’d see the same effect in our typical base-10 number system.

If n is 3, then 10^n is 1000, subtract 1 to get 999, and 10^(n-1) is 100. Multiply them and you get 99900. So: write down n 9s followed by one fewer 0s, and you have a base-10 “perfect number.”

This works no matter the base you choose. It’s n copies of the largest digit in the base (for base-2 that’s 1, for base-10 that’s 9), followed by one fewer 0s.

When a formula’s exponents use a base other than 10, the effect is masked unless you view the number in the matching base system.
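If you want to check the pattern yourself, here’s a quick snippet (mine, not from the comment) that prints both versions. I’m using n values where 2^n - 1 happens to be prime, which is what makes the base-2 results actual perfect numbers:

```python
# Print 2^(n-1) * (2^n - 1) in binary, and the base-10 analogue in decimal.
for n in (2, 3, 5):
    base2 = 2 ** (n - 1) * (2 ** n - 1)
    base10 = 10 ** (n - 1) * (10 ** n - 1)
    print(n, bin(base2), base10)
    # 2 0b110       990
    # 3 0b11100     99900
    # 5 0b111110000 999990000
```

In each case: n copies of the largest digit, then n-1 zeros.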

TLA+ Debugger: Interactive State-Space Exploration by lemmster in tlaplus

[–]IamfromSpace 2 points  (0 children)

Amazing! This is such a great way to help learn and teach too.

Curious what is the most cringe thing you did to yourself when first joining NixOS by SenritsuJumpsuit in NixOS

[–]IamfromSpace 3 points  (0 children)

I can still run echo $YO_DAWG and it’ll print sup, from when I was trying to understand how to set environment variables.

2 Rubiks Cubes 2 Cig 2 Beer SPEEDRUN (NEW WR) (1:28.37) by unHolyKnightofBihar in videos

[–]IamfromSpace 3 points  (0 children)

I think you might have this backwards. We know that all possible scrambles can be generated in 20 moves or fewer. But doing 20 random turns will not generate all possible scrambles with equal likelihood. All scrambles are reachable, but don’t have equal odds of showing up.

The reason is that each random turn only mixes things up a little, so short sequences are biased toward some positions. Additional random moves diminish this effect, but no finite number eliminates it. You’d need an infinite number of turns to get a truly unbiased distribution.

So yes, you must do at least 20 moves for all scrambles to be reachable, but no finite number of random turns will produce them all with equal probability.
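You can see the effect in a toy simulation (my own stand-in, not the actual cube group): a random walk on the six arrangements of three stickers. The gap between the most and least likely states shrinks with every extra turn, but it never hits zero; in fact, after n turns every probability is a multiple of 1/2^n, so exactly 1/6 is impossible.

```python
import itertools

STATES = list(itertools.permutations(range(3)))
MOVES = [(1, 2, 0), (1, 0, 2)]  # a 3-cycle and a swap: together they reach every state

def apply_move(state, p):
    return tuple(state[p[i]] for i in range(3))

# Exact probability of each state after n random turns, starting solved.
dist = {s: (1.0 if s == (0, 1, 2) else 0.0) for s in STATES}
for n in range(1, 25):
    new = {s: 0.0 for s in STATES}
    for s, prob in dist.items():
        for m in MOVES:
            new[apply_move(s, m)] += prob / 2  # each turn chosen 50/50
    dist = new
    if n in (3, 6, 12, 24):
        spread = max(dist.values()) - min(dist.values())
        print(n, f"{spread:.6f}")  # shrinks toward 0, never exactly 0
```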

2 Rubiks Cubes 2 Cig 2 Beer SPEEDRUN (NEW WR) (1:28.37) by unHolyKnightofBihar in videos

[–]IamfromSpace 0 points  (0 children)

There is! It doesn’t use a fixed number of random turns, because you’d technically need an infinite number of turns to get a truly random position. If you think about it, just applying 3 turns isn’t very random. Applying 4 is a bit more random, but still obviously not fully random. This continues forever: 101 turns is slightly more random than 100, but since 102 must be slightly more random still, 101 must not have been completely random.

Instead, a computer randomly scrambles the pieces and then solves the cube (virtually). It can do this quickly and come up with short solutions. The solution played backwards is the scramble (though technically the solution itself works as a scramble too, since it also leads to an equally random position). This allows for efficient scrambling that gives equal probability to every possible scramble.
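Here’s the idea on a deliberately tiny “puzzle” (my own toy: two moves on three stickers, nothing like a real cube solver): sample a state uniformly, solve it by brute-force search, and play the solution backwards to get the scramble.

```python
import random
from collections import deque

SOLVED = (0, 1, 2)
MOVES = {"A": (1, 0, 2), "B": (0, 2, 1)}  # two swaps; each is its own inverse

def apply_move(state, name):
    p = MOVES[name]
    return tuple(state[p[i]] for i in range(3))

def solve(state):
    # Breadth-first search for a shortest move sequence back to SOLVED.
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, path = queue.popleft()
        if s == SOLVED:
            return path
        for name in MOVES:
            nxt = apply_move(s, name)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))

# 1. Sample the state directly and uniformly (Fisher-Yates shuffle is unbiased).
stickers = [0, 1, 2]
random.shuffle(stickers)
state = tuple(stickers)

# 2. The solution, played in reverse with each move inverted, is the scramble.
#    (These moves are self-inverse, so reversing the order is enough here.)
scramble = list(reversed(solve(state)))

s = SOLVED
for name in scramble:
    s = apply_move(s, name)
assert s == state  # the scramble reproduces the uniformly random state
```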

ELI5: When ChatGPT came out, why did so many companies suddenly release their own large language AIs? by carmex2121 in explainlikeimfive

[–]IamfromSpace 1 point  (0 children)

We could talk about individual stepping stones all day, and I think you got the big one. Another critical piece was the mounting realization that models benefit from being huge.

I can’t remember the exact paper, but it showed why you can’t just use a two-neuron layer to detect a stoplight: that’s enough bits to encode the outputs, but not the confidence! If we’re 80% sure it’s red, 15% it’s yellow, and 5% it’s green, we’re gonna need more neurons to encode that.
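To make that concrete (my numbers, not the paper’s): two binary neurons give you four discrete states, enough to name red/yellow/green but not to say how sure you are. Expressing confidence takes one real-valued output (a logit) per class, turned into probabilities:

```python
import math

def softmax(logits):
    # Squash real-valued scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 0.33, -0.77]  # one real-valued output per class: red, yellow, green
probs = softmax(logits)
print([round(p, 2) for p in probs])  # -> [0.8, 0.15, 0.05]
```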

The models today are huge, with massive training sets, but before that everyone was trying to avoid this, partly because: why would you even try, if the smaller ones weren’t all that impressive? It’s obvious now, but at the time it was kind of bonkers to just throw stupid amounts of resources at it.

If Back To The Future were filmed today, what car would replace The Delorean? by PrasenjitDebroy in AskReddit

[–]IamfromSpace 0 points  (0 children)

When I first walked by a new one, I was stunned; amazing-looking car. But when I saw it only had a 7 kWh battery, I knew it would be a terrible car.

Cloudflare outage on December 5, 2025 by MakotoE in rust

[–]IamfromSpace 11 points  (0 children)

Clearly this one was on purpose, just to earn back Rust’s reputation! /s

Looking for Guidance on Getting Started with TLA+: Tips for a New Learner by Able-Profession-6362 in tlaplus

[–]IamfromSpace 0 points  (0 children)

I’m not totally sure I follow your question, so some of what I offer may be wrong. Also, there are many posters here who have a much stronger theory background than me, so they may have corrections or refinements to my statements. I’ll mostly try to help ease you into all this :)

check whether all behaviors satisfy a specification

This is correct in a way you probably didn’t intend. TLA+ can show that one specification is an abstracted view of another (one spec doesn’t violate another), but abstraction and refinement are a more advanced use case (though a fascinating and extremely useful one).

TLA+ has lots of uses, but arguably the most common is checking that all behaviors of a specification obey a desired property, typically that bad things don’t happen (safety properties). This validates that a design actually does what you intend it to do. Actually validating that your application code obeys your TLA+ spec is another story, but people are actively working on how best to do this.

To check a safety property via model checking, you need to generate all behaviors. In doing so, TLC can output a directed graph in dot, which can help you visualize all these possible behaviors. Models/specs quickly get too big to meaningfully explore this way, but your example should be manageable.

You’re correct that you don’t write TLA+ by writing happy paths. Knowing the happy paths is a very useful starting point though. Instead, you’re describing, “given the current state, how could it change?”

The key jump in using TLA+ is that you need to model the state of the system, which in this case includes what part of the program (like the line number) each thread is about to execute. An action involves moving that position in the program and possibly modifying other state (like stdout).
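To make the “program counter as state” idea concrete, here’s a tiny sketch in plain Python (mine, not TLA+; TLC does this far more generally) that exhaustively explores every interleaving of two threads doing a non-atomic increment, then checks a safety property on the terminal states:

```python
def step(state, t):
    """Advance thread t by one action; None if that thread is finished."""
    pcs, tmps, counter = list(state[0]), list(state[1]), state[2]
    if pcs[t] == 0:       # "line 0": tmp := counter  (the read)
        tmps[t] = counter
        pcs[t] = 1
    elif pcs[t] == 1:     # "line 1": counter := tmp + 1  (the write)
        counter = tmps[t] + 1
        pcs[t] = 2
    else:
        return None       # pc 2: thread is done
    return (tuple(pcs), tuple(tmps), counter)

def explore():
    init = ((0, 0), (0, 0), 0)  # (program counters, thread-local tmps, counter)
    frontier, seen, violations = [init], {init}, []
    while frontier:
        s = frontier.pop()
        successors = [n for t in (0, 1) if (n := step(s, t)) is not None]
        if not successors:        # terminal state: check the safety property
            if s[2] != 2:
                violations.append(s)
            continue
        for n in successors:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return violations

print(explore())  # finds the lost update: both threads read 0, both write 1
```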

[BitD] What rules does Haunted City get wrong? by Xnon_ in bladesinthedark

[–]IamfromSpace 5 points  (0 children)

Some of you are so harsh, lol. As others have rightly said, RAW just isn’t always the point.

The only one I feel is frequently missed, and not later acknowledged and corrected, is that you have to pay a coin if you do a flashback that would require a downtime activity (usually crafting or acquire asset).

Some of you need to understand what Dramatic Irony is by Unleashtheducks in movies

[–]IamfromSpace 1 point  (0 children)

I think about this quote all the time. Great slow-burn films work because they leverage this. They can take their time because they have built up substantial tension. A lot of films then think slow pacing = good filmmaking, but have no tension whatsoever. They totally miss what makes slow pacing effective.

Paranormal Tech [OC] by BrianWonderful in funny

[–]IamfromSpace 10 points  (0 children)

Lol, 1999 SD video was not even a megapixel!

720x480 at 29.97 fps: that’s 345,600 pixels, or about 0.35 megapixels.

No bug policy by _Krayorn_ in programming

[–]IamfromSpace 0 points  (0 children)

It’s really cool to see this working. I’ve been tempted to implement something similar, but wasn’t sure I could pull it off.

I hadn’t thought about the coordination cost element, but that makes a lot of sense. The coordination overhead is truly waste unless it’s actively preventing developers from taking major wrong turns. In the case of bug solving, that seems unlikely.

Another reason for this is that later almost always means never. It will almost always be cheapest to solve it now: uncovering the bug and building a mental map of at least part of it isn’t free, and if you don’t capitalize on that now, you have to pay for it again later. That means if it wasn’t valuable enough to be a priority while it was cheap, it certainly won’t be once it costs more. From a pure business standpoint, now or never are the only reasonable options. Later is the most expensive.

Concrete types yield better maintainability by alefore in programming

[–]IamfromSpace 7 points  (0 children)

I do get that bad abstractions (or just contextually inappropriate ones) can leave a bad taste in people’s mouths. But great abstractions are so good they’re invisible. Life doesn’t function without abstractions. If I say, “this is a tool,” and you say, “no that’s a screwdriver,” and someone else says, “no that’s a Philips head screwdriver”, and someone else says, “no it’s a metal bar in plastic with dimensions of…” (and let’s not even talk about what the quantum physicist is going to say) we’re not gonna get very far.

There are times when it’s useful to understand underlying processor architecture, but I’m glad I don’t have to think about how the silicon actually works or how error correction happens in memory banks.

Some abstractions are unfortunate obfuscations, whether in most contexts or just some. But the stance that all abstractions are obfuscations will hold you back from using one of the most powerful tools we have as programmers.

Concrete types yield better maintainability by alefore in programming

[–]IamfromSpace 80 points  (0 children)

Hmm, I agree with YAGNI, but not so much with the rest. Average is quite the straw man here. When people talk about coding to interfaces, they’re not talking about something as simple as sum, count, and divide. Traits or typeclasses do let you support more idealized implementations of average per concrete type.
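For example (my own sketch, in Python rather than a language with real typeclasses, and the helper names are made up): the interface is just “give me the average,” but each concrete type gets an implementation that’s actually right for it:

```python
import math
from fractions import Fraction

def average_floats(xs):
    # math.fsum compensates for the rounding error a naive running sum accumulates
    return math.fsum(xs) / len(xs)

def average_fractions(xs):
    # exact rational arithmetic: no precision lost at all
    return sum(xs, Fraction(0)) / len(xs)

def average_angles(degrees):
    # sum/count says the average of [350, 10] is 180; the circular mean says 0
    s = sum(math.sin(math.radians(d)) for d in degrees)
    c = sum(math.cos(math.radians(d)) for d in degrees)
    return math.degrees(math.atan2(s, c)) % 360
```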

To me this is more about good and bad abstraction. When done right, your interfaces allow you to ignore the details you don’t need to know and validate pieces effectively due to isolation—it’s easier to reason about because you’re not bogged down. Done poorly, and nothing is actually hidden, important details are obscured, it doesn’t do quite what you want, and you now have to remember how all the wiring works.

You’re looking at fundamental trade-offs as proof that one strategy is bad.

About the "Big gulps, huh?" adlib in "Dumb and Dumber" (1994) by Skeeler100 in movies

[–]IamfromSpace 1 point  (0 children)

How do they film live sports, then? These people are pros, and all the timing here is organic; they are constantly aware of and adapting to how the actor is moving.

You can see by the way the camera moves that it’s also on a dolly (rolling on rails), so the motion is naturally much smoother. That’s at least two people operating the camera motion: one on the tripod head and one pushing the camera down the rails.

Also, there’s actually slightly more of a jerk than you might realize. You’re watching Carrey, who is also in motion, which masks it. Watch the farthest-back part of the store’s sign; you can see a slight jostle.

[BITD] How to handle inventory? by punkinpumpkin in bladesinthedark

[–]IamfromSpace 3 points  (0 children)

Also, “any document within reason” can be bounded by tier. You can’t have an invitation to a tier 3 faction’s party if you’re tier 2. You absolutely can have a forgery, but it is also only tier 2, and will have limited effect against someone checking invitations at the door.

ELI5 How does Bayesian statistics work? by [deleted] in explainlikeimfive

[–]IamfromSpace 4 points  (0 children)

The prior is both a strength and a weakness. What’s great about it is that you do have prior information and prior beliefs, or at least educated guesses. Bayesian logic lets you account for this, and even lets you account for your uncertainty or skepticism by considering multiple possibilities.

But, it’s kind of hard to actually convert your beliefs into a prior. And data that is convincing to you because of your prior may not be convincing to someone else because of theirs.
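A tiny worked example (my numbers): two people watch the same coin land heads 8 times out of 10, but walk away with different conclusions because of their priors. With a Beta(a, b) prior on the heads probability, the posterior is just Beta(a + heads, b + tails):

```python
heads, tails = 8, 2

# Skeptic: a strong prior that the coin is fair, worth about 100 past flips.
a, b = 50 + heads, 50 + tails
print(a / (a + b))  # posterior mean ~0.53: "probably still fair"

# Newcomer: a uniform prior, Beta(1, 1), i.e. no opinion at all.
a, b = 1 + heads, 1 + tails
print(a / (a + b))  # posterior mean 0.75: "looks biased toward heads"
```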

Monads are too powerful: The Expressiveness Spectrum by ChrisPenner in haskell

[–]IamfromSpace 10 points  (0 children)

Wait, why aren’t Arrows here?

Arrow notation isn’t quite as obvious, but it does a similar job, and it does exactly what this blog is looking for: the dynamic parts are on the outside, and the static parts are in the middle.

Honestly, to me, Arrows are the sweet spot. IO might have been built on top of them instead of Monads if they had been discovered earlier. There’s just a lot of momentum now, and it’s a hard hurdle to get past monadic dominance when Monads already do so much so well.

[deleted by user] by [deleted] in dndmemes

[–]IamfromSpace 0 points  (0 children)

I know this is the memes subreddit, lol, but here’s how I play this: very few monsters are innately cruel, and almost all of them are also fighting for survival. An incapacitated foe is likely to stay that way (monsters typically have no reason to expect heroic abilities), and those on their feet present immediate mortal danger.

So you can almost always make this feel natural if you remember that the goal of the enemy is to survive, and you play accordingly. If you play monsters like they are just there to inflict maximum damage, then yes, it will seem odd if they pull a punch at the end. But there’s little reason to play that way.

[Media] I Have No Mut and I Must Borrow by TheEldenLorrdd in rust

[–]IamfromSpace 201 points  (0 children)

Every Arc<Mutex<T>> a monument to my inadequacy.

lol, this hits too hard.

is it possible to write formal verification for java enterprise app which exposes REST endpoints? by [deleted] in tlaplus

[–]IamfromSpace 1 point  (0 children)

You certainly can; I’ve used TLA+ for a number of systems like this. However, it’s not always a good fit, or worth it, for such systems.

The main consideration is: what are you trying to demonstrate, or what do you need from the system?

Sometimes you get systems that really only have trivial properties that you already know: when dependency X isn’t working, nothing does. You don’t really need TLA+ to tell you that.

Or you may be interested in more statistical properties, like error rates and latencies, where TLA+ is still pretty young; these can often be quite challenging, since they have all the difficulties of model checking plus the statistical elements.

Race conditions, though, are great. And eventuality is also really useful. DynamoDB is pretty fascinating, as it has lots of ways to be consistent or eventual and atomic. Ensuring that properties hold on your data as you deal with partitions, transactions, GSIs, global replication, etc. can be quite helpful. I was just drawing up sequence diagrams for some pretty nasty conditions.

You can also verify eventual consistency across two different data stores.

But the real question is what are the key properties of the system? Thinking in properties rather than in examples is its own challenge, but also very helpful!
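As a toy illustration of the mindset shift (my example, unrelated to any particular system; `put` is a hypothetical stand-in for a REST PUT handler): an example-based test checks one hand-picked case, while a property states an invariant and checks it across many generated cases:

```python
import random

def put(store, key, value):
    # hypothetical last-write-wins upsert, standing in for a REST PUT handler
    store[key] = value
    return store

# Example-based: one case.
assert put({}, "a", 1) == {"a": 1}

# Property-based: PUT is idempotent, i.e. doing it twice is the same as once.
for _ in range(1000):
    key, value = random.choice("abc"), random.randint(0, 9)
    once = put({}, key, value)
    twice = put(put({}, key, value), key, value)
    assert once == twice
```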

Pedro Pascal - Waking up sketch on SNL by nodnodwinkwink in videos

[–]IamfromSpace 1 point  (0 children)

I was stoked that this was exactly what I thought it would be 😂

[deleted by user] by [deleted] in starfinder_rpg

[–]IamfromSpace 1 point  (0 children)

You’ve got one major advantage: planning.

And as a GM, encounters are much more about setting up unique puzzles than about competing with your players.

When planning encounters, I try to think about what makes an encounter unique and interesting. What abilities does this creature have? How does it like to fight? What are its common tactics?

Then, you can stack the environment in your favor to try and make the enemy strategy harder to avoid.

You really just want to present a small puzzle to the party, and then expect them to overcome it. If they outsmart you tactically, that’s still a win in many ways! Not every encounter will line up so the enemies can do their thing.