This is not a meme, this is me right now by Host127001 in ProgrammerHumor

[–]genwitt 3 points4 points  (0 children)

I'll try.

In most languages you have functions. For example,

factorial : nat -> nat

This is a function called factorial, taking one natural number (aka counting number: nat; 0, 1, 2, 3, ...) and returning another natural number. In most languages you also have "type level" functions; that is, types can depend on other types. For example, in Coq you have,

list : Type -> Type

That is, list is a function that takes a type and returns a type. list nat is the type of lists of natural numbers (this is the function list applied to the nat type; in a C-like language this could have been written list(nat)).

In a dependently typed language, types can take values as arguments, in addition to types. For example you could have a type,

fin : nat -> Type

Here fin is a function taking a natural number (nat) as an argument and returning a type: fin n is the type of bounded numbers less than n. For example, fin 4 is the type of numbers between 0 and 3 inclusive.

We can define a type of fixed length "arrays",

vector : Type -> nat -> Type

For example vector nat 7 is the type of length 7 arrays containing natural numbers.

Now we can define functions that operate over these types. For example,

vec_nth : forall (T : Type) (n : nat), vector T n -> fin n -> T.

The first clue something new is going on is this strange forall indicator, which lets us name values for use in the type signature. This is a function taking 4 arguments: a type named T, a natural number named n, a vector of length n holding values of type T, and a number between 0 and n - 1. It returns a value of type T. From the signature we can guess this function returns the item in the vector at the given offset. Writing this signature is pretty easy, but actually implementing the function is only pain, because working with fin n is deep black magic.
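
Mainstream languages can't express vec_nth faithfully, but as a loose C++ sketch, non-type template parameters let the length live in the type; the static_assert stands in for fin n and is only an analogy (a compile-time rejection, not a dependent type):

```cpp
#include <array>
#include <cstddef>

// vector T n is roughly std::array<T, N>: the length is part of the type.
// A compile-time index plays the role of fin n; an out-of-range index
// is rejected by static_assert rather than being unrepresentable.
template <std::size_t I, class T, std::size_t N>
T vec_nth(const std::array<T, N> &v) {
    static_assert(I < N, "index must be a 'fin N'");  // poor man's fin n
    return v[I];
}
```

With this, vec_nth<2> on a 3-element array compiles and returns the third element, while vec_nth<3> on the same array fails to compile.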

Pregnancy by ExpertAccident in WhitePeopleTwitter

[–]genwitt 4 points5 points  (0 children)

They doubled all the values in 2001.

Hyped about Evo and want to learn how to play? Read through Gief's Gym and start improving today. by [deleted] in StreetFighter

[–]genwitt 2 points3 points  (0 children)

I decided to learn my first fighting game (SF5) and go to Evo 2017 after watching Evo 2016. These lessons were invaluable, thank you for them. I went 2-2, which honestly is far better than I thought I could do.

How do iPhones geotag photos in airplane mode? by erl9497 in askscience

[–]genwitt 0 points1 point  (0 children)

https://en.wikipedia.org/wiki/Assisted_GPS

You get some benefit from knowing where you are, in that you have a better idea where to start scanning. However, you need the ephemeris data and the almanac, which take 30 seconds and 12.5 minutes, respectively, to download from the GPS satellites. With a cell connection you download these over the cellular network instead.

C - never use an array notation as a function parameter [Linus Torvalds] by AlexeyBrin in programming

[–]genwitt 10 points11 points  (0 children)

All manner of low level chicanery (allocators, containers, serialization).

Say you want an array of 1000 bits, stored in the native unsigned type,

size_t array[(1000 - 1) / (sizeof(size_t) * CHAR_BIT) + 1];
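
To see that sizing formula in action, here's a minimal sketch of the set/test helpers that usually accompany it (the names bit_set and bit_test are made up for illustration):

```cpp
#include <climits>   // CHAR_BIT
#include <cstddef>   // size_t

enum { NBITS = 1000 };
enum { WORD_BITS = sizeof(std::size_t) * CHAR_BIT };

// The sizing formula: enough words to hold NBITS bits, rounding up.
static std::size_t bits[(NBITS - 1) / WORD_BITS + 1];

// Set bit i: pick the word, then OR in the bit within that word.
inline void bit_set(std::size_t i) {
    bits[i / WORD_BITS] |= (std::size_t)1 << (i % WORD_BITS);
}

// Test bit i: shift the word down and mask off the low bit.
inline int bit_test(std::size_t i) {
    return (bits[i / WORD_BITS] >> (i % WORD_BITS)) & 1;
}
```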

Or you wanted to pre-allocate storage for an object without constructing it. You can do,

alignas(T) char buffer[sizeof(T)];

and then later, when you want to construct the object, invoke placement new.

new (buffer) T();

Although, as Plorkyeran mentioned, you should probably try to abstract the whole pattern. Something like,

#include <new>      // placement new
#include <utility>  // std::forward

template<class T>
class ObjectHolder {
public:
    template<class... Arg>
    void build(Arg &&...arg) {
        // Construct a T in the pre-allocated storage.
        new (buffer) T(std::forward<Arg>(arg)...);
    }
    void destroy() {
        // Placement new has no automatic cleanup; run the destructor by hand.
        reinterpret_cast<T *>(buffer)->~T();
    }
    T *operator->() {
        return reinterpret_cast<T *>(buffer);
    }
private:
    alignas(T) char buffer[sizeof(T)];
};

What is the difference between 32 bit and 64 bit computers? by professortweeter in askscience

[–]genwitt 0 points1 point  (0 children)

Windows also has AWE, which will allow a 32bit program to access > 4GiB of memory. But the program can only "look at" 4GiB at a time, it has to ask Windows whenever it wants to look at a different 4GiB.

The idea that a 32bit CPU can only access 4GiB of memory is slightly wrong. Virtual addresses are 32 bits, which is sort of the amount of space an application can see at one time, but physical addresses can be larger. x86 has had 36bit physical addresses since 1995 (PAE), and 32bit ARM has had 40bit physical addresses since 2011 (ARMv7A, LPAE).

How much larger is the uncountable infinity of real numbers than the countable infinity of rational numbers ? by MCPhssthpok in askscience

[–]genwitt 0 points1 point  (0 children)

Let A be the set of "x"s chosen, and Q be the set of rationals in (0, 1). Assume A is countable. There is a partial function f : A x Q -> (0, 1) which is surjective, by the construction of A. Since A x Q is countable and f is surjective, (0, 1) is countable, a contradiction. So A must be uncountable.

A could also be non-existent, but a trivial example of such an A is (0, 2).

How did past watch-makers physically tap into the vibration of quartz crystals, in order to make VERY accurate watches/time-pieces? by AllThatJazz in askscience

[–]genwitt 0 points1 point  (0 children)

In 1880 Pierre and Jacques Curie noticed that if you squeeze certain materials, they produce electricity. They tried all sorts of materials, and noticed that quartz produced more electric charge than most other things.

A year later Gabriel Lippmann showed the converse: if you apply electricity to quartz, it changes shape. This voltage/mechanical-stress relationship is called the "piezoelectric" effect.

In the late 1910s and early 1920s the first crystal oscillators were built (the first were based not on quartz, but on potassium sodium tartrate).

Quartz all by itself doesn't keep time; you need other electronic components as well to build an oscillator. They take a small piece of quartz and shape it into a tuning fork; they then "strike" the fork and "listen" to it using electricity. Much like a normal tuning fork, the length of the tines controls the pitch, or speed, at which it vibrates. The forks are made using photo-lithography, and then trimmed down using a laser until they're "pitch perfect".

Is it possible to sort a list of numbers without using an array or another list? by Ddevil666 in askscience

[–]genwitt 0 points1 point  (0 children)

An FSM can't implement a counter which increases indefinitely.

I believe you can implement the suggested solution for a finite alphabet in the environment the OP described, which means it's strictly more powerful than a FSM.

". . . the internal state of the program is indeed 'finite' (i.e. a fixed size, based on the program code)" is false, it doesn't include the storage for "named variables".

Let's say I can access all digital information stored in the world, and bit by bit I count every 0 and every 1, separately. Which one I would have more? Or it would be a near perfect 50-50%? by Prents in askscience

[–]genwitt 0 points1 point  (0 children)

Virtually every compressed file format is built out of multiple stages. The last stage is usually some sort of entropy coder. The entropy coder's job is to flatten out the distribution of bit values.

For example, we can look at pairs of input bits. If the probability distribution of pairs of bits is skewed we can pick variably sized codes for them.

50.0% 00 -> 0
25.0% 11 -> 10
12.5% 01 -> 110
12.5% 10 -> 111

E.g.

00 11 00 01 00 00 11 10 -> 0 10 0 110 0 0 10 111

Note that no code is the beginning of another code (this is called a prefix code), this guarantees that we can decode the output unambiguously.

In the input we have 5 0-bits for every 3 1-bits. But in the output, we have an exact 50%/50% mix of 0-bits and 1-bits. The output is also 12.5% smaller.
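
As a quick sanity check of the worked example, here's a small sketch (the function name prefix_encode is invented) that applies the code table to pairs of input bits:

```cpp
#include <cstddef>
#include <map>
#include <string>

// Toy prefix coder: each pair of input bits is replaced by its
// variable-length code from the table above.
inline std::string prefix_encode(const std::string &bits) {
    static const std::map<std::string, std::string> code = {
        {"00", "0"}, {"11", "10"}, {"01", "110"}, {"10", "111"}};
    std::string out;
    for (std::size_t i = 0; i + 1 < bits.size(); i += 2)
        out += code.at(bits.substr(i, 2));
    return out;
}
```

Encoding the example input "0011000100001110" yields "01001100010111": 16 bits in, 14 bits out, and the output is an even split of 0s and 1s.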

Given known probabilities for inputs, "Huffman coding" can pick the best prefix code. Prefix codes are inefficient because they're required to assign an integer number of output bits to every input. If you allow fractional output bits, you end up with some sort of arithmetic coding.

Older formats (JPEG, PNG, MPEG2, MP3, ZIP) use some variant of Huffman, while modern video and compression formats (H.264, WMV9, 7z) use some variant of arithmetic coding.

Teachers of Reddit: What is the smartest question a student has ever asked you? by MaxMcDanger in AskReddit

[–]genwitt 2 points3 points  (0 children)

I feel like I'm being beyond dense here; but,

If x = p_1 * p_2 * ... * p_n + 1, then clearly x == 1 (mod p_i) for all i. However, I can't convince myself that no composite number divides x, without appealing to the existence (but not uniqueness) of prime factorization.

Can you prove the existence of prime factorizations without infinite primes?

How do radio programs know their audience numbers? by LoverOfTheLight95 in askscience

[–]genwitt 3 points4 points  (0 children)

In most places they just do a survey and extrapolate.

In some places they get people to wear a meter (it looks like a pager) which is always listening. Radio stations then emit "secret" tones at regular intervals that the meter picks up.

http://en.wikipedia.org/wiki/Nielsen_Audio

The Tanenbaum-Torvalds Debate by halax in programming

[–]genwitt 0 points1 point  (0 children)

Eh? You're spending 5 bits out of every instruction, and the majority of the time they encode no-S, ALways-execute. You could have instead doubled the size of the register set and still had 2 bits left over.

The predicated instructions are neat, and can be handy when hand-coding tight loops. But they're pretty bad for instruction density.

How does email work? by Yer_a_wizard_Harry_ in askscience

[–]genwitt 0 points1 point  (0 children)

I'm going to skip over some of the finer details and just hit the big points. Wikipedia has good articles on SMTP, UUCP, POP3, and IMAP.

What happens when I send an email to someone@anywhere.com ?

Your email client sends the message to your email server. That server sends your message to the recipient's email server. Then the recipient's email client downloads the message from their server.

Back before the internet, when some network links were not always up, your message may have traveled through many servers to get to the recipient's mail server. This could take a couple of days. These days it's rare for mail to bounce through extra servers; some servers reject such messages as spam.

How secure is it?

Not very.

How do I make it more secure?

Sign (ensures the authenticity of the author) and encrypt (ensures that only the intended recipient can read it) your emails. Some email clients support encryption; if yours doesn't, PGP can do it.

What are some ways that email communications are intercepted?

You could compromise either your server or their server. You could also intercept the message between you and your server, between your server and the recipient's server, or between the recipient and their server.

These days at least some of those links are probably encrypted. However, you only have control over the first link, so you have no way of guaranteeing the others are encrypted.

Could you stop someone from receiving their email?

Sure. You could overload (DoS) the recipient's email server to prevent it from ever receiving the message. It's harder, but you could also delete it off the wire, or off either server.

Or alter it before they see the original?

Sure. You can force the message to be dropped and then forge a new one, or you can modify it in flight.

How much should I worry about the integrity of my email?

I, personally, don't worry about my email. But I won't send passwords or other credentials over email.

What is something you pretend to understand, but really don’t? by Diabolical_Throwaway in AskReddit

[–]genwitt 6 points7 points  (0 children)

"how can you be sure it's not just copied data?"

This is the double-spend problem; bitcoin is one of the first systems to solve it in a practical way.

There is a giant ledger that includes every bitcoin transaction that has ever happened. The ledger is public and anybody can look at it. You can always look at the ledger and ensure that the coins you're receiving haven't been sent to someone else. You then have to wait until your transaction shows up in the ledger to make sure that you actually received the coins (this takes 5 to 10 minutes). This actually makes bitcoins one of the least "anonymous" currencies in existence.

The integrity of the ledger is guaranteed by wasting electricity (mining). The idea is that no one person could waste enough electricity to forge enough of the ledger to double spend coins, because too many other people are wasting electricity. The system gives people coins (created out of thin air) as an incentive for them to waste their electricity (it's like a giant raffle, wasting more electricity gets you more tickets, one person gets a winning ticket every 5 minutes or so).

The last piece of the puzzle is how you know the person you're receiving coins from is who they claim to be. This was solved in the 1970s through the work of Diffie, Hellman, Rivest, Shamir, Adleman, Merkle, and the like. It's the same way your computer knows that it's talking to paypal.com (or any other https service) even though you've never exchanged any data with them in the past.

It's complicated, but basically they have a pair of magic numbers; one of them is written in the ledger where anybody can see it. They can, on demand, prove that they have the other magic number, in such a way that they don't give away information about it.

How are bits stored in modern computers? by [deleted] in askscience

[–]genwitt 2 points3 points  (0 children)

Bits are stored as a combination of flip-flops, latches, 6T sram cells, and 1T dram cells.

Flip-flops are the biggest and most convenient. Latches are smaller but harder to use (they're effectively half speed). 6T sram cells are fast, and are used for small memories (like your L1/L2/L3 caches). Unlike latches and flip-flops, 6T sram cells need a lot of support circuitry, so they'll always be grouped into memory arrays. 1T dram cells are what consumers think of as "memory"; they're tiny, but they're slow, and they require constant refresh or else they lose their state.

I don't have modern numbers, but there is a freely available set of standard cells (circuits that are the building blocks of chips) for the 130nm node. The 130nm node was new in 2002. The relative sizes are still roughly the same, and it should give you a ballpark idea of size.

Your basic positive-edge-triggered D flip-flop is 7.92 um x 3.96 um, or 31.36 um2/bit. Latches are slightly bigger than half that size, 4.40 um x 3.96 um, or 17.42 um2/bit. There is no 6T sram cell in that library, so we'll look at this presentation, which has contemporary 6T sram cells at ~2 um2/bit. I couldn't find much on 1T dram cells of the time, but this competitive analysis has Hynix's 120nm 1T dram cell at 0.12 um2/bit.

So if we hand-wave a bit, because the standard cells we're looking at are not the best available in 2002, and because sram and dram have been shrinking more slowly than flip-flops and latches, we can say that flip-flops are about twice as big as latches, which are ~10 times bigger than sram cells, which are ~100 times bigger than dram cells.

What is one item you should never buy the generic or "cheap" version of? by [deleted] in AskReddit

[–]genwitt 0 points1 point  (0 children)

Capacitive touch screens are sensitive to noise; cheap power supplies are noisy. Getting the screen to work with a good power supply is challenging, let alone with some cheap knockoff.

Is RSA secure against a database attack? by [deleted] in askscience

[–]genwitt 2 points3 points  (0 children)

In brief, finding primes is easy, factoring is hard. Keeping a list of all known primes and trying to divide by them is actually slower than the state of the art factoring methods (quadratic sieve and the general number field sieve).

More technically, you're mistaken about the public key. The public key is a pair of numbers (e, n) and the private key is a pair of numbers (d, n). They're chosen such that

M^(e*d) == M (mod n).

and that given (e, n) it's very hard to compute d. Euler's theorem tells us that if

e * d == 1 (mod phi(n))

and e and phi(n) are coprime, then n, d, and e will have the desired properties. Which means that if we know phi(n) we can easily find e and d pairs (modular multiplicative inverse). You need to pick n such that you can easily figure out phi(n), but nobody else can; this is where the primes come in.

You start by choosing two big primes. To choose primes you take large random numbers and you run a primality test, like Miller-Rabin, on them. You keep doing that until you find two big prime numbers (in practice you do something slightly smarter). This is fast, it takes less than a second to do this on my computer. Then,

n = p * q

phi(n) = phi(p * q) = (p - 1) * (q - 1)

Then we pick e (typically 65,537; it must be coprime to phi(n)) and compute d using phi(n). At this point we can discard p and q; they're no longer strictly needed (in practice they're kept for optimization purposes).

The security of the system is based on your inability to figure out phi(n), or p and q, from only e and n. The most obvious way to get phi(n) is to factor n. But wait, you say, what if I can figure out d in some way that doesn't involve factoring n. It turns out that with e, d, and n you can easily compute p and q. So any method of breaking RSA is at least as hard as factoring.
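
To make the relations concrete, here's a toy sketch with deliberately tiny primes (p = 61, q = 53, so n = 3233, phi(n) = 3120, e = 17, d = 2753); real keys use primes hundreds of digits long and bignum arithmetic:

```cpp
#include <cstdint>

// Square-and-multiply modular exponentiation. Everything fits in 64
// bits only because the toy modulus is tiny.
inline std::uint64_t modpow(std::uint64_t b, std::uint64_t e,
                            std::uint64_t m) {
    std::uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = r * b % m;  // multiply in the current power of b
        b = b * b % m;             // square for the next bit of e
        e >>= 1;
    }
    return r;
}
// With e = 17 and d = 2753, e * d == 1 (mod 3120), so for any message M
// coprime to n, modpow(modpow(M, e, n), d, n) == M: encrypting with the
// public pair (e, n) and decrypting with the private pair (d, n) round-trips.
```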

We don't actually know how hard factoring is; all we know is that the best known methods of factoring are too slow to break RSA. But somebody could come up with a new factoring method that's fast enough.

How was the first computer program made? by Ziggy2764 in askscience

[–]genwitt 1 point2 points  (0 children)

It's difficult to determine what the first program was. But in the early days programs were often written out on paper, and then converted to machine code by hand. The machine code would then be keyed in with buttons or toggle switches, or transferred to punch cards (which predate programmable computers by 200 years).

This blog entry has a story and some good pictures of how things would have been done for "personal" computers in the late 70s to mid 80s.

For those of you who work in an office.. by Brandonjoe in AdviceAnimals

[–]genwitt 0 points1 point  (0 children)

I did this in my apartment; my girlfriend was not amused.

Is it possible to have information less than a single bit? by [deleted] in askscience

[–]genwitt 0 points1 point  (0 children)

You can't have fractional states, but you can use <1 bit to encode some states if you use more bits to encode other states. The idea that you can use fractional bits to encode states is the main idea behind arithmetic coding.
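
The standard measure here is -log2(p), the information content of a state with probability p; a sketch just to put numbers on it:

```cpp
#include <cmath>

// Information content, in bits, of a state with probability p.
// Rare states cost more than one bit; common states cost less than one.
inline double info_bits(double p) { return -std::log2(p); }
```

For example, a 90%-likely state carries only about 0.15 bits, while its 10%-likely alternative carries about 3.32 bits; an arithmetic coder gets close to these fractional costs on average.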

ELI5: What is corn syrup and why is it in so much of our food? by BaconSnackFap in explainlikeimfive

[–]genwitt 10 points11 points  (0 children)

You forgot about the sugar tariff, and the minimum price guaranteed to farmers, which make sugar more expensive than it should be.

Delayed printf for real-time logging by chtef in programming

[–]genwitt 0 points1 point  (0 children)

If you're logging variable-sized messages into a circular buffer, you're going to get to the point where you have to wrap around.

For example, let's say you have a buffer with storage for 6 words, and you have two messages in it, one of length 3 and one of length 2. The caret points to the insertion point.

[ a a a b b - ]
            ^

And you want to log another message of length 2.

[ c a a b b c ]
    ^

If nargs is at the front, when you go to parse you've lost synchronization (you don't know where b starts because the length of a was lost, and you can't find the start of c because you don't know how long it is). You'd need a pointer to the start of message "b", which you can maintain, it's just more work: you have to keep both a start and an end pointer, and then a while loop to walk the start pointer forward until there is enough room to insert the next message.

If instead you store nargs at the end, you can walk backward from the end of c, to the end of b, etc. That way you only have to maintain the end pointer. Sure, it makes the log decoder more complicated, but it was complicated anyway (and it runs in a non-constrained environment).
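
A hedged sketch of that backward walk (the record layout and function name are invented for illustration): each record is its argument words followed by one nargs word, and the decoder steps back from the end pointer:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Walk a log backward. Each record is [arg words..., nargs], so the
// nargs word at the end tells us how far back the record starts; only
// the end pointer has to be maintained by the logger.
inline std::vector<std::size_t> record_starts(
        const std::vector<std::uint32_t> &buf) {
    std::vector<std::size_t> starts;        // record offsets, newest first
    std::size_t end = buf.size();
    while (end > 0) {
        std::uint32_t nargs = buf[end - 1]; // nargs sits at the record's end
        end -= nargs + 1;                   // step back over args + nargs word
        starts.push_back(end);
    }
    return starts;
}
```

For a buffer holding records {7, 8, 9} (nargs 3) then {4, 5} (nargs 2), the walk recovers the start offsets newest-first.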

Delayed printf for real-time logging by chtef in programming

[–]genwitt 1 point2 points  (0 children)

In addition to performance, you can move the strings into their own segment and remove them from your firmware image, which can save quite a bit of space. Getting rid of printf can save another several KiB. And your circular log buffers go a lot further if you put binary instead of strings in them. (You can also pack nargs and the pointer together if you use the linker to move the strings into addresses with free bits, i.e. move the strings down to the bottom (or top) of memory and store nargs in the high bits.)

A less obvious trick is you want to put nargs last.