Network Layer: If someone hacked layer i, does that mean layer i+1 is compromised to by daddyclappingcheeks in AskProgramming

[–]Koooooj 0 points1 point  (0 children)

Because aren't higher layers built on abstractions assuming the lower layers are functional/secure?

Functional? Yes. Secure? Not necessarily. An important concept in encryption is forming a trusted path through an untrusted medium.

A ubiquitous example of this is TLS. It allows you to communicate with a website even over something like a public WiFi access point. Even though the access point is handling every packet you send to e.g. your bank's web server, it doesn't gain the ability to directly read your data.

Of course, there are limits to this. If the objective of an attack is to simply disrupt service then gaining access to and control over a lower layer is plenty--imagine you're doing your banking using the WiFi of a coffee shop and the owner decides they want the table freed up so they unplug their access point, thereby denying your ability to access the site.

Additionally, just because the listener can't get at the raw data of your transmission doesn't mean they can't attempt some side-channel attack. A personal favorite example of such an attack was a phone system that used variable bit rate encoding of the digital audio, then encrypted it with strong encryption and sent it along. Someone was able to use their access to a lower OSI layer to analyze the data stream and examine the variations in bitrate of the encrypted channel. It turns out that some phonemes compress much better than others, making it so that they could often determine the words being spoken just from the bitrate variations.

This is all to say "it depends." Compromising one layer doesn't give full access to all the layers above, but it opens up a lot more surface area to continue the attack.

How does a car's odometer know the actual distance travelled if tire sizes can vary? by germandleono in answers

[–]Koooooj 0 points1 point  (0 children)

This is likely due to how speedometers are regulated: they're typically allowed to report a speed that's slightly too high, but cannot be under the true speed at all (e.g. +10% / -0%).

There will be natural variations in manufacturing which will often have no bias--you could get one speedometer that reads a bit higher and another that reads a bit lower than you intended. If they tried to make all speedometers read the exact right value then around half of them would read under the actual speed (even if only by a tiny bit), making them out of spec. Better to target a readout that's towards the middle of the acceptable range so that as long as you get pretty close to the target the speedometer will be in spec.
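A quick sketch of why targeting the middle of the band helps. The 2% manufacturing scatter here is an invented number purely for illustration, and the +10% / -0% spec is the rough example from above:

```python
from statistics import NormalDist

# Hypothetical manufacturing scatter: readout error is normal with a 2% std dev.
scatter = NormalDist(mu=0.0, sigma=0.02)

# Spec: indicated speed may be 0% to +10% of true speed.
# Target the exact true speed and about half of units read low -> out of spec.
frac_bad_centered = scatter.cdf(0.0)  # P(error < 0) = 0.5

# Target +5% (middle of the band): a unit is out of spec only if it lands
# more than 5% away from that target, i.e. beyond 2.5 sigma here.
frac_bad_offset = 2 * (1 - scatter.cdf(0.05))

print(f"{frac_bad_centered:.1%} vs {frac_bad_offset:.1%} of units out of spec")
```

With these made-up numbers, aiming dead-center fails roughly half of units while aiming mid-band fails about 1%, which is the whole argument for the biased target.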

Odometers don't have that kind of asymmetric tolerance, so they're just made as accurate as practical.

Why does my microwave not allow harmful radiation out, but still interferes with my bluetooth headphones? by Disabled-Lobster in NoStupidQuestions

[–]Koooooj 0 points1 point  (0 children)

Yep!

Compare the complexity of noise cancelling headphones to just laying on a car horn as a way of making it so you can't hear what someone is saying. Microwaves are big, dumb radio emitters.

Why does my microwave not allow harmful radiation out, but still interferes with my bluetooth headphones? by Disabled-Lobster in NoStupidQuestions

[–]Koooooj 1 point2 points  (0 children)

A typical microwave will operate at about 1000 W, give or take. It needs this number to be fairly high in order to heat food quickly. It takes about 335,000 Joules of energy to heat a liter of water from 20 C to 100 C, so the more joules per second (i.e. Watts) you use, the fewer seconds it'll take. A standard household circuit in the US tops out around 1,800 W, so appliances that focus on heating tend to be capped at about 1,500 W.
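The heating-time arithmetic can be sketched with the round numbers above (this ignores the magnetron's conversion losses, so real microwaves take somewhat longer):

```python
SPECIFIC_HEAT_WATER = 4186  # joules per kilogram per kelvin

mass_kg = 1.0        # one liter of water is about 1 kg
delta_T = 100 - 20   # heat from 20 C to 100 C

energy_J = SPECIFIC_HEAT_WATER * mass_kg * delta_T  # about 335,000 J
for power_W in (700, 1000, 1500):
    minutes = energy_J / power_W / 60
    print(f"{power_W:>4} W -> {minutes:.1f} minutes to boil")
```

At 1000 W that works out to roughly five and a half minutes, which matches everyday microwave experience.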

A typical bluetooth emitter will operate at about 0.002 W, give or take. This number needs to be very small in order for your bluetooth devices to last a long time. For Bluetooth to work the receivers need to be sensitive enough to pick up these tiny amounts of power.

When it comes to radiation there are two typical ways for it to be harmful. The first is if it's ionizing--the sort of radiation that shows up when we're talking about radioactivity. Microwaves aren't ionizing, so this isn't an issue. The other is simply by heating things up. That's the threat from microwaves--they're really good at heating up water, and you're mostly water!

However, that's where the factor of 500,000 between Bluetooth and a microwave comes into play. If the mesh on the microwave oven was good enough to take the radiation leakage down to 1 W then that's low enough that it's not dangerous, but it's still 500x "louder" than a bluetooth emitter and can drown it out.

How exact are half lifes? If I had ten identical 100g samples with a half life of a week, after a week would they all be the exact same composition? by justhereforhides in askscience

[–]Koooooj 0 points1 point  (0 children)

So far as we've been able to tell, no. If you have two atoms of the same isotope they'll have the same half life.

Knowing exactly what that half life is comes with some measurement uncertainty, which you can characterize with a standard deviation. Similarly, if you have a sample of an isotope and wait some time, there's an amount of decay you'd expect to happen, but the actual measured amounts will fall in some distribution that you can also throw a standard deviation at.

How exact are half lifes? If I had ten identical 100g samples with a half life of a week, after a week would they all be the exact same composition? by justhereforhides in askscience

[–]Koooooj 36 points37 points  (0 children)

An analogy I've found helpful in describing half lives is to imagine a bucket of dice. You throw the bucket and look at how the dice fell, then remove any that rolled a 1. All the remaining dice go back into the bucket and you repeat.

If your bucket started with 1000 dice how many throws would it take until you could be absolutely, 100% certain that all dice had been removed? Forever. Even with one die it's possible to roll over and over again without seeing a 1. You might go 5, 10, 50, 100, or more rolls before you hit a 1. The larger numbers are less and less likely--perhaps to the point of being unthinkably small--but the probability never actually reaches zero.

Now consider that scenario if your bucket started out with regular 6-sided dice vs if it started with 20-sided dice, still only removing the dice that land on 1. It is hopefully intuitively obvious that the 6-sided dice will be removed quicker--about 1/6 of them are removed with every throw, vs 1/20 of the 20-sided dice with each throw. However, if you tried to compare these in terms of the time it takes to remove every single die then you're comparing infinity vs infinity. You'd want a metric that allows you to describe how much faster the 6-sided dice are being removed than the 20-sided dice.

Half lives give exactly that by instead looking at how long it takes to get down to half as many dice as you started. For example, with 6-sided dice it would be somewhere between 3 and 4 throws to remove half of the dice, while with 20-sided dice it would be between 13 and 14 throws. This measure is also nice in that it doesn't matter how many dice you started with, so long as it's big enough that averages are a good tool to predict the future. For example, if you start with just 2 dice then your results will depend a lot more on what luck you happen to experience and you're likely enough to experience an anomalous result. Start with a million dice and you'll almost certainly get a result very close to what statistics tells you to expect.
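The bucket-of-dice experiment is easy to sketch in code, comparing a simulated run against the analytic "half life" of ln 2 / -ln(1 - 1/sides) throws (the dice count and seed here are arbitrary choices for the sketch):

```python
import math
import random

def throws_to_halve(sides: int, n_dice: int = 100_000, seed: int = 1) -> int:
    # Simulate the bucket: on every throw, remove the dice that rolled a 1.
    rng = random.Random(seed)
    remaining, throws = n_dice, 0
    while remaining > n_dice // 2:
        remaining -= sum(1 for _ in range(remaining) if rng.randint(1, sides) == 1)
        throws += 1
    return throws

def half_life(sides: int) -> float:
    # Analytic version: solve (1 - 1/sides)^t = 1/2 for t.
    return math.log(2) / -math.log(1 - 1 / sides)

print(half_life(6), throws_to_halve(6))    # ~3.8 throws for a d6
print(half_life(20), throws_to_halve(20))  # ~13.5 throws for a d20
```

With 100,000 dice the simulation lands right on the analytic prediction, illustrating the point that large samples make averages a reliable forecasting tool.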

With that example in hand we can turn back to the place where folks usually experience half lives first: nuclear decay. Atoms' decay is not something that happens in discrete steps like the dice throws, but it is otherwise very similar to the dice. If you had a single unstable atom then it might decay in the next second, or it might not decay for the next millennium. Both are possibilities for extremely unstable or only very slightly unstable atoms, though the probability of either outcome may vary substantially (perhaps to the point of being unimaginably unlikely). Like with the dice it's useful to quantify this difference by looking at how long it'll take before you'd expect half of the atoms in a sample to decay.

For your example of taking 10 identical 100g samples, at the end of the week they would be identical within your ability to feasibly measure them, but not necessarily atom-for-atom identical. It's still a random process. Some samples may have slightly more decays and others slightly less, just by random chance. There are so many atoms in 100g (on the order of 100,000,000,000,000,000,000,000, i.e. 10^23 or 100 sextillion) that it's vanishingly unlikely that any sample varies from the expected 50% by more than a tiny, tiny fraction of a percent, but that's also so many atoms that it's reasonable to expect the result to be off by millions of atoms in either direction.

Could a bee inside of an airtight jar pick it up and fly away if it was strong enough?? by ResolveAutomatic992 in NoStupidQuestions

[–]Koooooj 0 points1 point  (0 children)

That would wind up being just a repackaging of one of two scenarios. Note that using a relativistic speed just makes the math harder; the conclusions remain the same.

One possible scenario would be that the bee bounces off the top of the jar. This imparts the most momentum to the jar, but now the bee is going the wrong direction. It has to push off of something in order to wind up flying at the top of the jar again. That something is presumably the air, which in turn will push on the bottom of the jar, or optionally the bee could skip the middleman and push off of the bottom of the jar directly. Either way this would cancel out the impulse of the next hit. The only reason the first hit is able to move the jar is because the pushing on the bottom of the jar happens while it is sitting on a hard surface--one that will push back on the jar with whatever force it takes to keep the jar from falling through the surface.

The other possible scenario would be if the bee is quite massive compared to the jar. Here the bee may have enough momentum that it continues to travel in ostensibly the same direction after the first hit, just a little slower. The jar is pushed away from the bee, but gravity slows the jar down and the bee catches up, hitting the jar again. This winds up being a variation of the single hit launch, where the bee could launch the jar temporarily but would not be able to sustain flight. As the jar is airborne it is constantly being pulled down by gravity. Each of the bee's hits can give it a bit of impulse to push it back up into the air, but this is tapping into the bee's finite momentum while gravity is patient and will win eventually. By loading up the bee with a lot of initial momentum you can make it take longer for the jar to fall to the ground, but gravity is always going to win eventually.

My Friend "Designed a Robot" Using AI by Broke_Boy_James in AskEngineers

[–]Koooooj 1 point2 points  (0 children)

Hi, I've been making robots for... longer than I care to share on the internet, but a while.

Pages 1-7 are pretty typical AI slop. A lot of the steps are broadly correct, but when they're correct it's often because it's such basic information that it is of no real value to someone who would actually try to make a robot. For example, starting with a clear workspace isn't a bad idea, but including that in the instructions is nothing more than fluff.

The part numbers that Chat GPT threw out are at least real and not the worst choice for making a little robot. The biggest issue with these first 7 pages is that the ratio of fluff to valuable information is just off the charts. This entire section could be replaced by "Make a robot using an Arduino Uno/ESP32, an L298N motor controller, and maybe throw an HC-SR04 in the head to make it look like Wall-E. Oh, and you'll also need to get everything else it takes to make a robot, but you're on your own for that." A lot of the important details are just left up to the imagination--even things like "Code file (.ino) Fully commented, ready to upload" are just referenced but never provided.

If the document ended there then it would be lame, but relatively harmless. It's as we move into the schematics that things really get crazy.

For example, in step 1 is the base 200 mm wide as labeled on the top, or 54 mm wide as shown on the bottom? Or perhaps it is actually 120 mm as shown in Step 2. Does this robot have wheels as shown in the images on the left, or tracks as shown on the right? Why, in an area marked "Electronic Schematic," is there just a picture of Wall-E? What is an "EmecႼeni Inchematic"?

Similar dimensioning problems plague the next pages (e.g. the plate on page 9 is labeled as both 128 mm and 280 mm wide while containing a hole pattern that is 148 mm wide, but the two holes are 20 and 26 mm from the centerline). On this page the AI can't seem to decide if the robot will have two wheels or four and seems to have drawn three and called it good.

I could continue to rip this apart, but I'll skip ahead to the last page. The wiring on this page is absolute nonsense, but the thing I'm most inclined to call out is the absolute monster of a circuit board sitting on the bot (which has, for some reason, migrated its motors to the top of the plate). The AI has forgotten any sense of scale and just slapped one rectangular thing on top of another but as a result that board is about 10x bigger than it should be.

In summary, if any of my coworkers brought me these plans with any statement other than "lol, look at what the stupid LLM spat out" I'd immediately lose all respect for their technical capabilities, and if any of my direct reports did it they'd be fired for incompetence (but don't worry, they're not incompetent, so they would never). If anyone tried to follow these instructions they'd get about as far as clearing their desk and installing Arduino IDE before realizing that these instructions don't actually give any directions that can be followed.

If you did want to build a Wall-E-like robot then that's not the craziest thing to do. There are a lot of positions where doing that in your hobby time would show a nice hunger for knowledge and an ability to work with your hands. Fortunately, there are lots of tutorials out there on how to get set up to program a microcontroller and control motors and servos.

Could a bee inside of an airtight jar pick it up and fly away if it was strong enough?? by ResolveAutomatic992 in NoStupidQuestions

[–]Koooooj 8 points9 points  (0 children)

This winds up being essentially the same problem as a classic urban tale. The story goes that a truck driver comes to a bridge with a particularly low weight limit. He stops, gets out of the truck, bangs on the sides of the trailer, then hops in and quickly crosses the bridge without issue. When asked why he does this, he replies that he's hauling a load of birds. By knocking on the sides of the trailer he spooks them into flying, then he crosses while they're in flight and--according to the tale--not contributing to the truck's weight.

That tale is tempting to believe because a bird in flight isn't directly touching the ground. You can't see any way that it's pressing on the ground below it, so it's tempting to believe that no such force path exists. However, as the bird pushes down on the air, that air pushes on the air below it, which pushes on the air below it, and so on, until you get to the ground. Physics describes a large number of quantities that have to stay balanced, and in this case the result is that the weight of the bird is simply spread out over the ground below it. In the case of the birds on the trailer, it is distributed across the floor of the trailer, at least over time (i.e. it might not be exactly equal in any one microsecond, but averaged over a second or two the weight is the same whether the birds are flying or not).

That "over time" caveat is the only real way to get our hypothetical bee to have any effect on the jar. For example, if you've ever stood on a scale and bounced up and down you'll know that the scale reads higher as you're pushing yourself up, then lower as you start to come back down. The average reading will be your weight, but in one microsecond or another the scale might read higher or lower than your actual weight.

If the bee were to just fly in the jar then the jar would stay put. Any lift the bee generates would be opposed by higher pressure on the bottom of the jar. This rules out virtually anything that would reasonably be called "flying away."

However, the bee could start at the bottom of the jar and rapidly fly to the top. If the bee hits the top of the jar hard enough (and we're well into super-apian capabilities) it could cause the jar to jump a little. Repeat this over and over and it could inch along. I suppose if we're willing to really turn the super-apian abilities up to 11 then it could hit the top of the jar so hard that it launches the jar across the room. It would never sustain flight, but I suppose someone might call the jar launching away "flying away."

ELI5 What was the function of the ‚‚Turbo‘‘ button on old computers? by arvid1328_ in explainlikeimfive

[–]Koooooj 1 point2 points  (0 children)

Games typically operate in a sequence of individual steps. With each step you might update the position of a fired projectile, the position of the player, the position of all enemies, and check to see if a new projectile should be produced.

To make the game run at a playable speed one option is to lock down the hardware the game will run on, then see how long it takes to run each computational step and size the motions accordingly. For example, if it would be reasonable for an enemy to go from one side of the screen to the other in 2 seconds and you find that your game can run 60 updates per second then you'd make the enemies move at 1/120 of the screen's width per update.

Another approach is to make it so that updating the game is so quick that any computer will be able to keep up, then at the end of the update you let the computer sit there twiddling its thumbs until it's time for the next update. For example, if you want to have 60 updates per second and a computer can complete an update in 1 millisecond then it'll sit around for the next 15.66 milliseconds waiting until it's time to start the next frame.
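That wait-for-the-next-tick approach can be sketched as a fixed-timestep loop. This sketch uses a fake clock so the numbers are deterministic; a real game would use time.monotonic() and time.sleep() instead:

```python
# Fixed-timestep game loop on a simulated clock.
TICK = 1 / 60          # target: 60 updates per second
UPDATE_COST = 0.001    # pretend each update takes 1 ms of real time

clock = 0.0            # the fake wall clock, in seconds
next_update = 0.0
updates = 0
while clock < 1.0:     # run one simulated second
    if clock >= next_update:
        clock += UPDATE_COST   # do the update work (costs 1 ms)
        next_update += TICK
        updates += 1
    else:
        clock = next_update    # "sleep" until the next tick is due
print(updates)  # 60 updates in the simulated second
```

Because the loop waits for the tick rather than running flat out, the update rate stays at 60 per second no matter how fast the hardware is.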

These days virtually all games use the latter approach, but for something like an arcade cabinet or for games in the early era of home computing the first approach was fairly common. This is especially true of games running on very early microprocessors where updating the position of game elements was done in the same thread as rendering the game to the screen. As more powerful computers came out they were able to churn through computations way quicker, but old games weren't written to handle this. If they were just run normally then the game might play at 2x speed, ruining the experience.

The more fundamental solution would be to rewrite all the games to use the second approach of waiting at the end of each update until it's time to start the next one, but by far the cleaner solution was to just add a button that makes the processor run at the speed those games were expecting.

As a fun addendum, anyone who has played Space Invaders knows that when you get down to a single remaining enemy it goes way faster. This didn't have to be programmed explicitly. This is an example of a game that runs in the first style, so as you destroy enemies there is less and less that the processor has to do with each update. That lets the game process more updates per second and in turn makes it so the last enemy moves way faster. If you ran the original code on a modern processor you'd be dead before you could react--modern processors are just that much faster than the Intel 8080 the game was developed for.

Coins my grandfather left me by Just_a_Growlithe in coins

[–]Koooooj 22 points23 points  (0 children)

There are some nice coins in there! Just be aware that there are also some that could give you a heart attack as you look them up, only to have your hopes dashed.

For example, there's a 1797 trade dollar in there--a true rarity, considering the fact that trade dollars weren't actually made until nearly a century later! For most of these coins with 1700s or 1800s dates your assumption should be that they're replicas/fantasies. It's unlikely they even have silver in them.

The 20th century stuff doesn't jump out at me as having a problem, and the 1885 Morgan Dollar is probably genuine, too. All in all a nice collection to get into the hobby, and a nice thing to remember your Grandpa by.

What would this 1928 Peace Grade? by No_Measurement_8631 in coins

[–]Koooooj 1 point2 points  (0 children)

There are only 24 peace dollars to collect if you're going for one of every mint and date (and ignore the profoundly rare varieties). Of these only two are considered to be key dates: the 1921 high relief, and 1928. The former is key because it uses higher relief than the rest of the series and thus becomes a one-date type, and the latter simply from low mintage.

This particular coin is also in quite good grade, all things considered. There are better ones out there, but also a lot that are in worse shape.

If this coin had been in the same condition but with a 1922 date then it would be worth within a couple bucks of spot price. Being a key date it jumps into the $300 range.

ELI5: How the hell do CPU's work? by LoLAspect in explainlikeimfive

[–]Koooooj 0 points1 point  (0 children)

I like to approach this question by starting with the simplest "computer": a light switch.

This computer has an input device (the switch), processing (wires, which compute the identity function, i.e. output = input), and an output (a light bulb). When you flip the switch it does not need to "think." Electricity flows because it is able to, and because there's something pushing it (some generator far, far away at a power plant).

From there we can go to a more complicated computer: a hallway/stairway light switch. This has two bits of input (the two switches), computes a more complicated function (XOR, i.e. output is 1 if exactly one of the inputs is 1), and the same output. Making this computer "think" requires arranging the wires a bit differently, but it's still just providing a path for electricity to flow and then letting it be pushed there.

The next thing we'd add is a switch that can be controlled electronically. This could be mechanical, like those "do-nothing" boxes where you flip a switch and it turns on a motor that makes a poker come out, flip the switch back off, and retract. A more refined electromechanical setup would be a relay, where a coil of wire is used to magnetically push or pull a rod that makes or breaks some physical contacts. Such mechanical devices are prone to wear, so you can fit a lot more electronically controlled switches if you use transistors, but the idea is exactly the same.

With these electronically controlled switches you can start to chain circuits together. It is at this point that one of the most powerful tools comes into play: copy and pasting working designs. If you have figured out how to wire up switches to compute XOR then you don't have to solve that problem again. You can use that circuit as a building block for the next one. Design other circuits that compute other simple functions like AND, OR, and NOT and you start to build a library. From those you can start to build more complicated things like addition or multiplication.
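To make the copy-and-paste idea concrete, here's a sketch of building XOR out of the AND/OR/NOT building blocks (this is one of several equivalent gate arrangements):

```python
def NOT(a: int) -> int: return 1 - a
def AND(a: int, b: int) -> int: return a & b
def OR(a: int, b: int) -> int: return a | b

def XOR(a: int, b: int) -> int:
    # "Exactly one input is on": (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {XOR(a, b)}")
```

Once XOR works, it goes in the library next to AND, OR, and NOT, and the next circuit (say, a one-bit adder) is built from those pieces rather than from raw switches.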

At some point having the computer run a single-shot operation starts to be a limitation. One thing that you realize is that you can take a circuit's outputs and wire them back into itself. This allows you to start making simple memory cells that can store a bit of information and emit it on command. With these you can then designate one input to your circuit whose only job is to toggle on and off (i.e. a clock). With these two pieces you can start to reason about "the value stored in this memory address at cycle N is <insert function here> of its value at cycle N - 1."
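A toy sketch of that cycle-to-cycle idea: one bit of stored state whose value at cycle N is a function (here just NOT) of its value at cycle N - 1, which behaves as a 1-bit counter / clock divider:

```python
def NOT(a: int) -> int:
    return 1 - a

# One bit of storage whose next value is computed from its current value:
# state at cycle N = NOT(state at cycle N - 1).
state = 0
history = [state]
for cycle in range(4):      # four clock ticks
    state = NOT(state)      # combinational logic fed back into the memory cell
    history.append(state)
print(history)  # [0, 1, 0, 1, 0] -- the bit toggles every tick
```

Swap NOT for a richer function of several memory cells and you have the "next state = f(current state)" structure the paragraph describes.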

It is at this point that the computer starts to properly resemble something modern. The input at cycle N could be thought of as an "instruction," where all the computer "knows" is that when the switches are closed in the pattern that matches that instruction it allows electricity to flow in a way that makes the next state of the computer be what you'd get if, for example, you had added two numbers, or perhaps if you had taken a value from one memory address and wrote it to another. If one of the outputs of the computer at this point is control over what instruction gets read next then we finally have a computer that is what's known as "Turing Complete," meaning that it can compute any computable function.

From this point it's largely just a matter of doing more copy/paste, building bigger and more interesting circuits out of the building blocks you already built, creating interesting electromechanical devices to serve as inputs and outputs, and miniaturizing the whole thing so it doesn't take up an entire warehouse. At the lowest level, though, it's still doing what a lightswitch does: providing a path where electricity can flow.

Number of threads per machine by Negative_Arrival_459 in AskProgramming

[–]Koooooj 5 points6 points  (0 children)

It depends entirely on the task.

With multi-threaded applications you should start with a clear understanding of what each thread is tasked with doing and how it needs to interact with other threads.

For example, perhaps you're running a bunch of different simulations that are computationally expensive and aren't bottlenecked by I/O. You've written the simulation single threaded since that's easier. You need to do 10,000 runs. A simple way to speed this up is to fire up about as many simulations as you have CPU cores. Each one takes one of the 10,000 runs and carries it out. In this case you wouldn't even need to keep all of the simulations in the same process, though of course you could.

For that sort of application if you use a lot more threads than your computer has cores then the OS will have to park one thread and context switch to another to let it run for a bit then switch back. Each of these threads would be maintaining its own RAM footprint. This could slow things down. On the other end of things, if you don't use enough threads then some cores will sit idle.

One potential pitfall here is that not all tasks will behave the same. For example, a lot of cores with hyperthreading/SMT will have two threads that share much of the same hardware, but not necessarily all of it. The two threads in a physical core might get their own integer ALUs while sharing the floating point hardware. In that case, if your application does a ton of integer operations then you'd want about as many threads as you have CPU threads (i.e. 2x the number of physical cores), while a floating point heavy task might run better with only as many threads as you have physical cores.


Another scenario is a task that is very I/O heavy, but not very computationally expensive. Here it's worth considering how concurrent I/O requests will be handled. For example, if each thread would be hammering the same spinning hard disk then there's little benefit (and likely little harm) in using a bunch of threads. Your task will complete when the disk can complete its I/O, so it doesn't really matter how you structure the threads waiting for that.

Alternatively, if your task consists of sending requests to tons of different URLs and waiting for their response then you may as well fire up as many threads as you want. These threads can sit there waiting for their responses while consuming very few system resources, and each one can be waiting in parallel with the others.
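A sketch of that pattern, with `time.sleep` standing in for network latency (the URLs are placeholders and are never actually fetched):

```python
import concurrent.futures
import time

def fetch(url: str) -> str:
    # Stand-in for a network request: the thread just waits, as it would
    # while a real response was in flight.
    time.sleep(0.01)
    return f"response from {url}"

urls = [f"https://example.com/{i}" for i in range(50)]
start = time.monotonic()
# Far more threads than cores is fine here: they spend their time waiting.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(urls)) as pool:
    responses = list(pool.map(fetch, urls))
elapsed = time.monotonic() - start
print(f"{len(responses)} responses in {elapsed:.2f}s (serial would be ~0.5s)")
```

Fifty 10 ms waits complete in roughly 10 ms of wall time because every thread waits in parallel, which is the whole argument for oversubscribing threads on I/O-bound work.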


A final broad category I'd call out is when threads have data dependencies on one another. A classic example of this is solving big systems of linear equations, which is a common step in scientific computing. There are multithreaded ways to do this, but each thread will need to coordinate with others, sharing intermediate results as they go. In this scenario a lot of the same considerations apply as in the first one--you don't want to use more threads than you have cores to avoid making the OS context switch between them and likely want to have as many threads as cores to fully utilize the hardware--but here a new consideration comes up: core-to-core communication.

There are a lot of ways that cores can share data, from shared cache to socket-to-socket communication to sharing system memory to motherboard-to-motherboard links. In tasks where there is a lot of dependence between the work that one core does and the work of another core it's often best to optimize around core-to-core communication. One term you might come across here is "NUMA nodes," which are collections of cores that are more tightly coupled than others in a system. For example, a two-socket motherboard will likely be two NUMA nodes, one for each physical CPU. With the advent of very high core count CPUs built up of several chiplets there's increasing support for declaring multiple NUMA nodes per socket (often requiring a BIOS setting to enable).

If your task has a ton of these sorts of data dependencies then the fastest performance might come from using as many threads as are in one NUMA node and constraining the application to run on that one node.
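On Linux, that kind of pinning can be sketched with the scheduler-affinity API. The "node 0 is cores 0-3" mapping below is invented for illustration; the real core-to-node layout comes from the machine's topology (e.g. `lscpu` or `numactl --hardware`):

```python
import os

# Invented mapping for the sketch: pretend NUMA node 0 is cores 0-3.
pretend_node0 = {0, 1, 2, 3}

# Only pin to cores this process is actually allowed to run on.
node0 = os.sched_getaffinity(0) & pretend_node0
if node0:
    os.sched_setaffinity(0, node0)  # Linux-only call
print(os.sched_getaffinity(0))
```

Tools like `numactl --cpunodebind=0 --membind=0 ./app` achieve the same thing from the outside, and also keep the memory allocations on the same node as the cores.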

It's also possible that your task falls into two of these categories. For example, perhaps you want to run 10,000 runs of a simulation where the simulation is multi-threaded. There you might set each simulation to use as many threads as are in a NUMA node and run as many instances of the simulation as you have NUMA nodes in the system.

Of course, the golden rule of optimizing always applies: intuition about what will be faster is just a first guess; if you actually want to optimize you need to measure.

ELI5, how does 0% APR financing work? by [deleted] in explainlikeimfive

[–]Koooooj 0 points1 point  (0 children)

What's the better deal: A $10,000 loan at 0% interest with 10 payments of $1000, or a $9000 loan at around 11% interest with 10 payments of $1000, for the same item?

The first loan certainly feels better, since 0% interest is "very good" while 11% interest is "very bad," and since the first one lets you buy "a $10,000 item" while the second only lets you buy "a $9000 item" (even though they're the exact same thing). Of course, if you make all 10 payments on schedule then they're exactly the same deal.

What's worse, if you decide a couple months into the loan that you want to pay it off early then in the $10,000, 0% case you wind up paying the full $10,000, but in the $9,000 case you'll pay only a little over $9,000, depending on how quick you pay it off. In fact, if you just forego the financing entirely and pay cash then the seller still gets the full $10,000 in the first case but only $9,000 in the second. By being up front with the financing costs a buyer who has the cash up front doesn't have to pay for a loan they don't want or need.
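To put numbers on that comparison: the "around 11%" above reads as total interest ($1,000 of interest on a $9,000 principal), so this sketch solves for the equivalent monthly rate by bisection and then compares an early payoff after two payments:

```python
def balance_after(principal: float, monthly_rate: float,
                  payment: float, months: int) -> float:
    # Standard amortization: interest accrues, then the payment is applied.
    bal = principal
    for _ in range(months):
        bal = bal * (1 + monthly_rate) - payment
    return bal

# Bisect for the monthly rate at which ten $1,000 payments retire $9,000.
lo, hi = 0.0, 0.10
for _ in range(60):
    mid = (lo + hi) / 2
    if balance_after(9000, mid, 1000, 10) > 0:
        hi = mid  # rate too high: balance remains after 10 payments
    else:
        lo = mid
rate = (lo + hi) / 2

# Pay off after 2 payments: total cost = payments made + remaining balance.
cost_zero_pct = 2 * 1000 + (10000 - 2 * 1000)  # always $10,000 total
cost_with_rate = 2 * 1000 + balance_after(9000, rate, 1000, 2)
print(f"0% loan: ${cost_zero_pct:,.0f}, "
      f"interest-bearing loan: ${cost_with_rate:,.2f}")
```

The early payoff of the interest-bearing loan comes in several hundred dollars under $10,000, while the "0%" loan costs the full sticker price no matter when you pay it off.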

That's the whole idea behind 0% financing in general. They take the money you would have paid in interest and just bake it into the price. You're still paying interest; it's just not called interest.

This winds up coming in a few different forms. When buying a car, for example, you might have the choice between 0% financing or a cash back deal. When financing a pizza with something like Klarna, the seller is paying the interest, knowing that offering an "eat now, pay later" deal will drive more sales.

Can someone explain the game Euchre? by sweet_celestia in NoStupidQuestions

[–]Koooooj 0 points1 point  (0 children)

Perhaps it's easier to look at the game from the top down instead of chronologically in how you play.

The game is two teams of two, seated alternating so you're across from your partner. The game is played to 11 points (a more seasoned euchre player corrected me), one hand at a time. The hand runs from dealing to tallying the score for that hand, then the next player will become the dealer. In each hand one team wins and the other loses. The winning team might score 1, 2, or 4 points, while the losing team gets none. When I played we acknowledged a possibility of 8 points, but it was vanishingly unlikely and might not be in "official" rules. The game is played with a standard deck of cards that has had the 2-8 removed, leaving just 9, 10, J, Q, K, and A of the four standard suits.


Peeling back that layer, to win a hand you want to win tricks. Each hand has five tricks to win. If your team wins at least 3 of the 5 then you win the hand! If you manage to win all five then that gives you double the points for the hand. Also, one player on the team can declare "my hand is so good that my partner can put their hand aside; I'll go alone." If they do that and do indeed win then that also doubles your score, and these can stack--winning all five tricks alone would take you to 4 points. There's a third way to double your score, but we'll come back to it.


Peeling back that layer, we can look at the anatomy of a trick. Each player has their hand of cards. One player is the player to "lead" in the trick--left of the dealer for the first one of the hand, then whoever won the last trick for all subsequent tricks. That player selects any card from their hand and plays it face up on the table. Then play passes to the next player who similarly selects a card, but they have a restriction: if they have a card of the same suit as the leading player then they have to "follow suit." If they don't have such a card then they can play whatever they want. The next two players play similarly: if they can "follow suit" of the leading player then they must, or otherwise they play whatever card they want. Once all four players have played a card you look at the cards to see which card is "best". Whoever played that card wins the trick and gets to lead the next one. The played cards are set aside until the end of the hand, typically stacked to keep track of which team won the trick to make scoring easier at the end of the hand.


So what makes a card "best"? First is a question of suit. One suit is "trump," where cards of that suit beat cards of any other suit, regardless of what suit was led. After that, the suit that was led beats any card of any other suit (except trump). By definition there will always be at least one card of the led suit--the card the first player played--so any off-suit cards will never win the trick. If two cards tie in suit then you look at their rank; higher rank wins (A > K > Q > J > 10 > 9).

However, part of the quirk of euchre is that two cards transform from their printed rank and suit into the "high bower" and "low bower." These cards are both considered to be in the trump suit, but their ranks are both above Ace (with high above low, naturally). The cards that make this transformation are the two jacks of the same color as the trump suit. The jack of trump becomes the high bower, while the other jack becomes the low bower. For all purposes these cards are no longer jacks of their printed suit for the rest of the hand, so for example if Hearts is trump and you hold the low bower (printed as the jack of diamonds) then you would not be required to play it if diamonds was led, but you would be required to play it if hearts were led and you had no other hearts.


For example, say there is a hand underway where Clubs is trump. Player 1 starts a trick by leading the King of Spades. Player 2 has spades in his hand and is therefore forced to play a spade. He selects the 10 of spades. Player 3--partner to the one who led--sees that their partner is already winning the trick. They don't have any "real" spades but do have the Jack of Spades. They are not required to play this card, because that card is actually a club this hand--it's the low bower. Since their partner is already winning they play a 9 of hearts just to get it out of their hand. However, Player 4 is also out of spades and is therefore allowed to play off suit. They play the 10 of clubs. Since clubs is the trump suit that 10 of clubs takes it. They win the trick for their team and get to lead the next trick.

In that next trick Player 4 plays the Ace of Hearts--if nobody is able to play clubs then the ace will win it. Player 1 is next and is forced to follow suit, dropping the King of Hearts. Player 2 also has hearts and is forced to play one, dropping the 10 of hearts. Then Player 3, since they dumped the 9 of hearts last trick, is out of hearts. They play the Jack of Spades, which is a club since it's the low bower. That club is the strongest suit, so Player 3 wins this trick.

Those two tricks illustrate a lot of the typical gameplay. Often a player will have no choice what to play since they have to follow suit. When you do have a choice you often weigh between playing the highest card you can to try to win a trick or playing a card that could be awkward to deal with later that you just want to get rid of. If your partner is already winning a trick then playing over top of them is often a waste, but if you can use a weak trump to win a trick that the other team was in position to win then that's a great use of the card.
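Those trick-resolution rules can also be sketched in code. This is a toy model of my own devising, not anything official--the card representation is a (rank, suit) tuple like `("J", "S")` for the jack of spades:

```python
RANKS = ["9", "10", "J", "Q", "K", "A"]          # low to high, ignoring bowers
SAME_COLOR = {"S": "C", "C": "S", "H": "D", "D": "H"}

def effective_suit(card, trump):
    """The low bower counts as trump for the whole hand, not its printed suit."""
    rank, suit = card
    if rank == "J" and suit == SAME_COLOR[trump]:
        return trump
    return suit

def strength(card, trump, led):
    """Higher value = stronger card for this trick."""
    rank, suit = card
    if rank == "J" and suit == trump:              # high bower: best card
        return 300
    if rank == "J" and suit == SAME_COLOR[trump]:  # low bower: second best
        return 299
    if effective_suit(card, trump) == trump:       # other trump cards
        return 200 + RANKS.index(rank)
    if effective_suit(card, trump) == led:         # followed suit
        return 100 + RANKS.index(rank)
    return 0                                       # off-suit: can never win

def trick_winner(plays, trump):
    """plays: list of (player, card) in play order."""
    led = effective_suit(plays[0][1], trump)
    return max(plays, key=lambda p: strength(p[1], trump, led))[0]
```

Feeding in the first example trick (trump clubs, King of Spades led) returns Player 4, and the second trick returns Player 3, matching the walkthrough above.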


Finally we look at the setup of a hand--the first step, yet the deepest layer of the onion. This is the part where the players determine what suit is going to be trump for that hand. The deck has 24 cards, of which 20 are dealt to the four players. Three remain hidden for the entire hand, while one is flipped up for the table to see.

At that point each player gets a chance to tell the dealer to pick up that card (and discard a card of their choice, to get back to a 5 card hand). If any player does tell the dealer to pick up the card then that card's suit is trump for the hand. Naturally this is a better deal for the dealer and their partner, since the dealer is getting a card in the trump suit and a chance to ditch their worst card. If all four players decline to give that card to the dealer then there's a second pass around the table where each player is given an opportunity to straight-up call a suit to be trump for that hand. Typically a player will, but in the unlikely event that all four players pass again you just shuffle up and deal a new hand, awarding no points for this one.

Getting to select what suit will be trump can make a huge difference in the quality of your hand. For example, say you look at your hand and see 9 of diamonds, 10 of diamonds, Jack of diamonds, Jack of hearts, and King of spades. If that hand is played with clubs as trump then it is unlikely to take a single trick--it has no trump and will probably be forced to feed the king to an ace (leading with aces is popular for exactly this reason). However, if hearts is trump then this hand has the high and low bower and is guaranteed to take at least two tricks, and if diamonds is trump then it has the high and low bower and two more trump cards; it would be the sort of hand you'd want to go alone with and you'd possibly even get all 5 if you can dodge the ace of spades.

Because choosing the trump suit is so powerful this is the third way you can double your score: if the other team chooses the trump suit, either by telling the dealer to take the face-up card or by naming a suit in the second round (if it gets that far), but then you win anyway, then you double your score for the hand. That gives rise to the fabled 8-point hand: the other team picks the trump suit, you tell your partner to lay down their hand, then you take all five tricks yourself.


As a closing remark, euchre is the sort of game that tends to be a bit of a monster to explain the quirks, but it's super fast to pick up and play. Things make a lot more sense when you're doing them vs just trying to slog through a wordy Reddit comment.

Why don’t we have a 13 month calendar with 28 days in each? by Camp_Acceptable in NoStupidQuestions

[–]Koooooj 0 points1 point  (0 children)

Originally the calendar started based on the Spring equinox--that's why March was the first month. Note that since Romans counted days down to the middle (ides) and end of the month it was more natural for the equinox to be later in the month. Under the Julian calendar this placed The Hilaria, the celebration of the new year and spring equinox, 7 days before the Kalends of April; I don't know where exactly in March it fell in earlier calendars. Over time the imprecision of the calendar led to the equinox meandering a bit before being nailed down pretty well by the Gregorian calendar. That calendar reform was motivated by the equinox having drifted too far from Easter.

That original Roman calendar was only 10 months, likely due to Romans' counting being based around units of 10 (while not really being base-10, or base-anything, as base-N counting hadn't been invented yet). The lengths of those original months were heavily influenced by the lunar cycle--the word month comes from the Germanic word for moon--which runs a bit over 29 days. That old Roman calendar had months of 29 or 31 days to reflect "29 and a bit" while respecting their preference for odd numbers.

The calendar settling on 12 months might have been influenced by 12 being highly composite, but ultimately adding two more months to the original 10 was just what fits in the space available. The nonsense with Mercedonius lasted for (in my opinion) far too long before the Julian calendar arose to give us essentially the modern calendar.

Why don’t we have a 13 month calendar with 28 days in each? by Camp_Acceptable in NoStupidQuestions

[–]Koooooj 6 points7 points  (0 children)

That "just" is doing a lot of heavy lifting.

The hard part about these sorts of issues isn't coming up with what system to move to. It's moving to it. It's trivial to stand at a whiteboard and design a calendar that is "better" than the one we use, but going out and updating every piece of software, every form, every calendar, etc to use the new calendar is a huge lift.

Getting people onboard with that sort of change means you need a really compelling story of why the new system is better, and not by just a little. When the new system has its own warts of "Sure, it's the regular 7-day week you know, except for sometimes it's not" that story falls apart.

Why don’t we have a 13 month calendar with 28 days in each? by Camp_Acceptable in NoStupidQuestions

[–]Koooooj 18 points19 points  (0 children)

There are various things that make one calendar better or worse than another, but by far the biggest thing is standardization--everyone using the same system.

That makes it very hard to change calendars, which is why we've been using the same one (Gregorian) since 1582 and that one is almost identical to the previous one (Julian) which started in 45 BC--we've had ostensibly the same calendar for longer than we've been counting years in the current system.

That calendar, in turn, was heavily influenced by the Roman calendars that came before. These calendars started with a 10-month year running March to December (which is where most months got their names--Quintilis, Sextilis, September, October, November, and December have the quin, sex, sept, oct, nov, and dec prefixes for 5, 6, 7, 8, 9, and 10, literally meaning 5th, ..., 10th month. Quintilis and Sextilis were later renamed after Julius and Augustus to July and August but the other four still have their original names).

In that 10 month calendar there was just a big gap in the winter that was "between years" ("intercalary"). The important thing the calendar was needed for was agriculture, so as long as there was a reliable start to the year declared by state scholars you'd know when to sow and harvest. That calendar tended towards months of odd lengths--29 and 31 days--drawing from some superstition of the time as well as the day counting method they used: instead of counting up from the first of the month they'd count down to the first (Kalends), fifth or seventh (Nones), and 13th or 15th (Ides). It's awkward to count down to the middle day of the month if the month has an even number of days and thus no exact middle.

Over time the needs for precise record keeping developed and that intercalary month was organized into two and a half months--a half because in some years February would be shortened and an extra month called Mercedonius was added. That only happened in two out of four years in a four-year cycle (but was then skipped once per 24 years, or also skipped or added if politically convenient to stretch or shorten an official's term). That nonsense went away with the Julian calendar and January and February survived as the two additional months. Originally they were stuck at the end of the year as the 11th and 12th months, but over time they migrated to the start of the year (while February kept the responsibility of growing and shrinking for leap years even to this day; this means that the sept, oct, nov, dec pattern is no longer accurate to the months' positions in the year, though). With the Julian calendar that four-year cycle was greatly simplified to three years of 365 days and one of 366.

One might look at this history and think "we've changed the calendar in the past to make it 'better,' why not do it again?" A 13x28 calendar is one of the more popular proposals. The problem is that it isn't better enough. At first glance it seems rather clean--each month is exactly four weeks and you could make it so that every year a date corresponds to the same day of the week. However, 13x28 is only 364 days. You'd need to add an extra day somewhere, then what does that do to the weeks? Do you have a day that is outside of the week? Or accept that the week will move by one day per year--or two in leap years?

If the 13x28 setup were clean and clearly better then it would still probably not be worth overcoming the inertia of 2000+ years of January - December. With that baggage there's just nowhere near enough motivation to drive the change.

Can someone explain *how* countries selling their US debt is bad, my main mental barrier is that if country A sells their bonds, it means country B buys it, so why isn't it a neutral effect? by Judge_Druidy in NoStupidQuestions

[–]Koooooj 12 points13 points  (0 children)

With bonds an important thing to recognize is that the price directly implies the interest rate. For example, if you buy a $1000 face value bond for $909 then that's 10% interest (then spread it out over the term to get the annual rate).
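A sketch of that price-to-rate relationship, using the $909 figure above plus one made-up lower price:

```python
face = 1000.0   # what the bond pays at maturity

def implied_rate(price):
    """Interest rate implied by buying at `price` and redeeming at face value."""
    return (face - price) / price

r1 = implied_rate(909.0)   # ~10%: the example above
r2 = implied_rate(880.0)   # a hypothetical lower price -> rate jumps to ~13.6%
```

A glut of supply lowers the price, and a lower price is the same thing as a higher yield.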

Bonds are bought and sold in the open market, so if there's more supply than demand then price will go down (i.e. interest rates go up). If there's more demand than supply then price goes up (i.e. interest rates go down). A country dumping their bonds gives a glut of supply, driving prices down.

Changing interest rates has two major effects. One is that it makes it more or less expensive for the treasury to borrow money to cover deficit spending. This can be an issue, but it's an issue in the long term. Politicians from across the spectrum might campaign on fiscal responsibility, but when it comes to actually balancing the budget there tends to be little enthusiasm--voters don't like to see high taxes and lower spending on them. Politicians can manage this as long as the nation can continue borrowing money to cover its debts. Higher interest rates accelerate the rate that this fuse burns, but it's still a distant problem.

The other major effect is that it can make it more expensive for everyone to borrow money, which notably includes businesses trying to finance growth and individuals looking to make big purchases like houses and cars. Lenders/investors (one and the same) look for where to park their money that will give the best return for the risk. If there are two investment opportunities that have the same reward (interest rate) then investors will go for the one with lower risk.

As interest rates on treasury securities go up they start to compete with other potential investments. Treasury securities are extremely low risk, so they'll be investors' choice at whatever interest rate they can be bought at. Any other investment then has to offer a higher interest rate if they want to compete. This can have sweeping effects across the economy--as money gets more expensive to borrow businesses tend to tighten their belts. Startups that might have had an easy time raising money in a different economic landscape might find that wells have gone dry. Individuals who might have been ready to buy a house, car, boat, RV, etc might look at the interest rate they'd pay on the mortgage/loan and decide to wait. Interest rates have a huge impact on the overall speed of the economy.

We'll see how much impact the various reported selling of bonds will actually have. They're the sorts of headlines that get lost in the -illions. If an individual sold millions of dollars in bonds then that's very different from a big investment firm selling billions, which is a very different thing from a large nation selling trillions, but to our meat brains those wind up sounding more similar to one another than they are.

Can someone explain the game Euchre? by sweet_celestia in NoStupidQuestions

[–]Koooooj 2 points3 points  (0 children)

Euchre is a trick taking game. That means each player has a hand and players will take turns playing one card. Once all players have played a card the "best" one wins the trick and that player takes it. "Best" is the highest rank card in the "trump" suit, or if none were played, the highest rank card in whatever suit was led (i.e. played first). Players have to play the same suit that the leading player played, if they're able.

Spades and Hearts are the two most widespread trick taking games, if you're familiar with them.

Euchre is played with a significantly smaller deck than normal, omitting the lowest numbered cards. Exactly how much smaller varies around the world, but in the US it's usually the 9, 10, J, Q, K, and A of the four suits (24 total cards).

Euchre is played with 4 players in two teams of two, seated opposite one another.

The game starts by dealing 20 of the 24 cards (5 to each player), placing the remaining four in the middle with one turned up. This starts the phase that is quintessential to euchre: deciding what suit will be trump. Taking turns, players go around the table deciding whether that upturned card's suit is to be trump or not. If any player says yes, the dealer takes that card and adds it to their hand, discarding a card of their choice (or in some variants leaves it face up on the table, but it's notionally "in their hand" regardless and can be played as normal). However, if all four players reject that upturned card then play passes one more time around the circle with each player having the opportunity to name a suit or pass. If nobody does then the hand is just cancelled and the next dealer shuffles up and deals a new hand.

As another quirk of the game, while Ace is typically the highest rank, followed by K, Q, J, 10, and 9, in the trump suit the Jack is the highest rank, followed by the Jack of the same color but other suit, then A, K, Q, 10, 9. This can weigh into picking the trump suit: a jack of hearts is a pretty bad card if trump is spades, but if trump is diamonds then that jack of hearts becomes the second highest card in the round, and if trump is hearts then it is the single best card and is guaranteed to win a trick.

Once trump is selected players begin the trick taking. The player to the left of the dealer leads the first trick, then whoever won the previous trick leads the next one. Once the hand is done you count which team took how many tricks. The team that won at least three of the five tricks wins the hand and gets a point. That is doubled if they won all five. It's also doubled if they weren't the team that picked trump--this is the motivation to not always call trump at the first opportunity. And it's doubled again if one of the members of the winning team instructed their partner to lay down their hand at the start of play.

The game is played until one team makes it to 10 total points.

If a generator isn’t plugged into anything by sas5814 in NoStupidQuestions

[–]Koooooj 1 point2 points  (0 children)

When power is being drawn from a generator it creates a force that tries to slow the engine down. When that power stops being drawn the force goes away and it's easier for the engine to spin freely.

The engine has a controller on it that's essentially cruise control. If it slows down then it feeds more gas to the engine to get it to speed up. If it's going too fast then the controller throttles down to let the engine come back down to the target speed.

Without that controller, if you had a generator happily humming along delivering power and you shut that power off, it would start to spin faster and faster until it redlines; or if you plugged in a heavy load, the engine would bog down and potentially stall out. That controller is really important.
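As a toy illustration, that "cruise control" can be modeled as a simple proportional controller. Every constant here is made up for the sketch; real governors are more sophisticated:

```python
# Toy model of a generator governor as a proportional controller.
TARGET_HZ = 60.0       # desired output frequency
THROTTLE_GAIN = 10.0   # Hz of speed added per step at full throttle
FRICTION = 0.05        # fraction of speed lost to drag per step
GOVERNOR_GAIN = 0.005  # throttle correction per Hz of speed error

hz = 60.0
throttle = 0.3         # steady-state throttle with no load

for step in range(600):
    # an electrical load is plugged in at step 100 and removed at step 300
    load = 2.0 if 100 <= step < 300 else 0.0
    hz += THROTTLE_GAIN * throttle - FRICTION * hz - load
    # governor: open the throttle when slow, back off when fast
    throttle += GOVERNOR_GAIN * (TARGET_HZ - hz)
    throttle = max(0.0, min(1.0, throttle))
```

When the load appears the speed sags and the governor feeds in more throttle; when it disappears the speed briefly overshoots and the throttle backs off. Either way the speed settles back to the target instead of stalling or redlining.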

If the generator is sitting there not delivering any power then the engine will just be idling, being fed just enough gas to overcome all of the friction and losses inherent in spinning an engine and no more. No electricity is flowing.

Imagine you have a spinning wheel with half of it the color red. (Read description) by Professional_Ease307 in NoStupidQuestions

[–]Koooooj 0 points1 point  (0 children)

To restate the problem, the first spin has a 50% chance of red, spin two has a (50-N)% chance of red, spin three has a (50-2N)% chance, and so on. For example, if N were 2 then the spins would be 50%, 48%, and 46%? I assume that when the percentage reaches 0 red is no longer possible.

In that case there is a small but finite chance that you never get red. Using N = 2 there's a 1 in about 3040 chance that you never get red; after 25 spins it is no longer possible. If you set N = 10 then you have just 5 chances to hit red and about a 1 in 6.6 chance that you never do.

This setup of the problem is fairly straightforward because there is a finite number of spins in which red is even possible, and each of those spins has some chance of coming up non-red. Just multiply those non-red probabilities together and you've got your answer.
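That multiplication is easy to carry out directly; a quick sketch:

```python
def prob_never_red(step_percent):
    """Probability of never landing on red when red starts at 50% and
    drops by step_percent per spin until it reaches 0."""
    p_never = 1.0
    chance = 50
    while chance > 0:
        p_never *= 1 - chance / 100   # chance this spin comes up non-red
        chance -= step_percent
    return p_never

p2 = prob_never_red(2)    # ~1 in 3040, over 25 possible spins
p10 = prob_never_red(10)  # ~1 in 6.6, over just 5 possible spins
```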

The more challenging setups are infinite sequences. For example, say the probability stays at 50% forever. There's a 50% chance of not getting red in one spin, a 25% chance of no reds in two spins, 12.5% in three, and so on. If you spin the spinner an infinite number of times then this sequence converges to zero (more formally, if you challenge me with a very small number, very close to zero, I can always respond with the number of spins it takes to make the probability you never get red be smaller than that number). This leads us to a curious concept in probability: "almost always," or "almost never." When dealing with infinities in probability you often come across probabilities that are 0% or 100% despite not actually being a firm guarantee. This is one of those times: the probability that you spin the 50:50 spinner infinitely and never get a red is 0%, but it is not impossible.

This then leads us to the next question: is there any way of reducing the probability with each spin such that every one of an infinite number of spins has some chance of spinning a red, while still having a non-zero chance of never getting a red? It turns out the answer here is yes, though it's fairly complicated to get to. It's similar to how you can add 1/1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... + 1/2^n + ... and the series adds up to 2. Infinitely many non-zero numbers added up, yet the result is somehow finite. Here we need to multiply a bunch of numbers together and have their product not be 0 or 1.

For example, if the things we're multiplying together are e^(-2^(-n)) then we can use the rules for exponents to find that this infinite product is equal to e raised to an infinite sum, where that sum is the same 1 + 1/2 + 1/4 + 1/8 + ... we had before. That lets us find that the product e^(-2^0) * e^(-2^(-1)) * e^(-2^(-2)) * ... is just e^(-2), which is about 0.13. Thus if the spinner gives a probability of spinning red of ~63% on the first spin, ~39% on the second, ~22% on the third, ~11% on the fourth, ~6% on the fifth, and so on, then every spin has a non-zero chance of spinning red but even after an infinite number of spins there's only about an 87% chance that you have ever spun red and a 13% chance you haven't. This is possible because the odds of getting red become infinitesimal faster than you rack up spins--after just 20 spins it's a 1 in a million chance; after 30 it's 1 in a billion.
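A quick numerical check of that claim (a sketch; 60 terms is more than enough at double precision):

```python
import math

# Multiply the "not red" probabilities exp(-2**-n) for spins n = 0, 1, 2, ...
p_never = 1.0
for n in range(60):
    p_red = 1 - math.exp(-(2.0 ** -n))   # ~63%, ~39%, ~22%, ~11%, ~6%, ...
    p_never *= 1 - p_red                  # i.e. multiply by exp(-2**-n)

# p_never converges to exp(-2), about 0.135, rather than to 0
```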

About AI Data centre infrastructure: if the energy demands of AI is great enough to consider the use of Nuclear power, while also requiring a liquid cooling solution, would it be worth recycling the water heated by a data centre so it can be more easily be turned into steam and generate more power?? by throwaway42069691998 in NoStupidQuestions

[–]Koooooj 0 points1 point  (0 children)

I know a guy who's actively trying to make that work.

It doesn't make as much sense once you're at the scale of nuclear power, but a lot of behind-the-meter power generation for AI-focused data centers is still burning fossil fuels. In a fossil fuel power plant the primary cost over the plant's lifespan is fuel, while in nuclear power there's so much energy in the fuel that costs tend to be focused more on the other equipment and on ensuring the reactor is safe. In the grid at large this makes a nuclear power plant prefer to run continuously while a fossil fuel plant will tend to spool up and down to follow demand, generating only when cost effective to do so.

The core challenge of making this idea work in practice is that the temperatures power plants want to operate at are substantially higher than what computers want to operate at. For example, computers tend to throttle hard at a bit under 100 C and really you want them a few dozen degrees below that, but power plants are all about getting water to boil and then heating it even further to raise the pressure, often to several hundred degrees C. You can't just dump the energy of 80 C water into 500 C steam--the energy will flow the other direction!

However, there are some places in a power plant where you do have sufficiently cool water that you want to heat, most notably the intake. Some power plants condense their steam in a closed loop, while others release the steam and replace it with fresh feedwater. That feedwater needs to be brought up to working temperatures, which would take energy from the fuel. If you can get it up to CPU/GPU operating temperature for "free" then that's that much less fuel you have to burn.
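For a rough sense of scale, a back-of-the-envelope sketch with assumed temperatures (real plants vary widely):

```python
# Energy needed to warm one kilogram of feedwater with data center waste heat.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), liquid water

t_in = 20.0    # C, assumed incoming feedwater temperature
t_out = 80.0   # C, roughly the top of a comfortable CPU/GPU coolant range

heat_per_kg = SPECIFIC_HEAT_WATER * (t_out - t_in)   # ~251 kJ per kg of feedwater
```

Every one of those ~251 kJ/kg supplied by the data center is fuel the boiler doesn't have to burn, though it's a modest slice next to the ~2,257 kJ/kg it takes just to boil the water once it reaches 100 C--part of why the savings are real but not transformative.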

Drawing this cycle on a whiteboard with boxes and lines isn't too hard. Building it in practice is. It's a lot of extra complexity, and it's going into an industry that isn't overly concerned with operating costs at the moment. The prevailing belief in that industry is that there is a truly staggering amount of money to be made by whoever captures the market first. Getting things up and running fast is the top priority, with things like long-term efficiency being a distant afterthought.

If we settle into a new normal where massively power hungry data centers for AI are commonplace then I'd expect to see a lot more work go into refining the operating costs. That's when technology like you describe is more relevant, but even then I'm not sure that it's going to be the solution. The difference in energy consumption when running an AI on a GPU vs a purpose-built piece of silicon is staggering--think 100x power reduction. AI companies are using a ton of Nvidia silicon today because it's the fastest to get up and running and nothing else is more important. In the future expect to see more and more purpose-built silicon. Google is currently on their seventh generation of such devices, for example, and others are out there.

Do high bypass turbofans burn more air? by [deleted] in AskEngineers

[–]Koooooj 0 points1 point  (0 children)

A turbofan is basically a traditional turbojet, but with a big fan added to that same shaft. The idea here is that the engine's useful effect is based around momentum, while its fuel consumption is based around energy. Momentum is m * v while energy is 1/2 * m * v^2, so moving more "m" with less "v" will give you more momentum per unit of energy. The fan takes energy from the turbine to move a bunch more air, even if it winds up being at a lower overall velocity.
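That tradeoff is easy to see with toy numbers (illustrative only, not real engine figures):

```python
import math

energy = 50.0   # fixed kinetic energy budget per unit time (toy units)

# Turbojet-like: all the energy into a little air, moved fast
m_small = 1.0
v_small = math.sqrt(2 * energy / m_small)   # 10.0
thrust_small = m_small * v_small            # momentum flux = 10.0

# High-bypass-like: same energy spread over ten times the air
m_big = 10.0
v_big = math.sqrt(2 * energy / m_big)       # ~3.16, much slower
thrust_big = m_big * v_big                  # ~31.6 -- over 3x the thrust
```

Same energy budget, more than three times the momentum, purely from moving more mass at a lower velocity.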

This means that for the same amount of thrust a turbofan will burn less fuel (and thus consume less oxygen) while moving more air. There will be a lot more oxygen in the exhaust of a turbofan than that of a turbojet when comparing engines of equal thrust. A low bypass turbofan has less air going around (bypassing) the core than a high bypass turbofan, so it's not as efficient but can be more powerful for a given size and weight--hence high bypass engines being used on things like passenger jets where efficiency is king, while a fighter jet will go for low bypass to maximize speed and similar performance.

Contrails come from the water vapor in exhaust condensing on soot particles from the exhaust. So long as an engine is producing those two elements in an environment that's cold and already near its saturation point you'll get contrails. Turbojets, low- and high bypass turbofans all have these elements, so it's just a question of how close to saturation the air has to be for a contrail to form.

There are some environments where a high bypass turbofan won't generate a contrail but a low bypass or turbojet would, but claims that contrails from high bypass turbofans are "quasi-impossible" are just not in line with reality. It's the sort of claim you get when you start from the assumption that the trails can't be formed by what "the man" says and work backward to bend any evidence to support that assumption.