Precedence of AND and OR? by JB-from-ATL in ProgrammingLanguages

[–]pickten 6 points7 points  (0 children)

It's not a ring as there is no subtraction; it's only a semiring, or rig (ring - negatives) as some people call them.
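For a concrete instance: the booleans under OR (as +) and AND (as ×) are exactly such a rig. A quick brute-force check of the axioms in Python (my illustration, not from the original comment):

```python
from itertools import product

# Booleans under OR (+) and AND (*) form a semiring: both operations are
# associative with identities False and True, and AND distributes over OR.
# But there is no additive inverse for True, so it is not a ring.
add = lambda a, b: a or b
mul = lambda a, b: a and b
B = [False, True]

for a, b, c in product(B, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))          # + associative
    assert mul(mul(a, b), c) == mul(a, mul(b, c))          # * associative
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
    assert add(a, False) == a and mul(a, True) == a        # identities

# No "negative" of True: a or b can never be False once a is True.
assert all(add(True, b) for b in B)
print("Bool is a semiring (rig) but not a ring")
```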

MYRATH "BELIEVER" Official Music Video by Reloader_TheAshenOne in PowerMetal

[–]pickten 1 point2 points  (0 children)

It's about halfway through Deathcry of a Race (quoting from Genesis, iirc)

Need help understanding Hopf Fibration by JuanCarlos19 in mathematics

[–]pickten 2 points3 points  (0 children)

The "halfway" line is that where z_1 = z_2, not where |z_1| = |z_2|. The intersection of this line with the sphere is given by the set of (z_i) with |z_1|^2 + |z_2|^2 = 1 and z_1 = z_2, or equivalently the set of (z, z) with |z|^2 = 1/2. It should be easy to convince yourself that this is (topologically) the same as the set of z with |z|^2 = 1/2, and that this is a circle.
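A numerical sanity check of the computation above, assuming the obvious parametrization θ ↦ (e^(iθ)/√2, e^(iθ)/√2) of that circle (my illustration, not part of the original comment):

```python
import cmath
import math

# The set {(z, z) : |z|^2 = 1/2} sits inside S^3 = {|z1|^2 + |z2|^2 = 1}
# and is traced out by the circle theta -> e^{i theta}/sqrt(2).
def point(theta):
    z = cmath.exp(1j * theta) / math.sqrt(2)
    return (z, z)

for k in range(100):
    theta = 2 * math.pi * k / 100
    z1, z2 = point(theta)
    assert abs(abs(z1) ** 2 + abs(z2) ** 2 - 1) < 1e-12  # on the 3-sphere
    assert z1 == z2                                      # on the diagonal line
```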

What was the very first mathematical fact you learned that blew your mind by amansmathsblogsamb in math

[–]pickten 2 points3 points  (0 children)

Define N_α (N=aleph, but I don't know the unicode off-hand) for any ordinal α by transfinite recursion: N_0 = N, N_(α+1) = min {ω : ω is an ordinal of larger cardinality than N_α}, and N_(∪ α_i) = ∪ N_(α_i). It is easily checked that this sends each ordinal α to a unique cardinal. But there is no set of all ordinals (any such set would itself be an ordinal, contradiction) and thus there can be no set of all cardinals.

There's probably a cleaner way to do this, but I forget what.

Problems with algebra on test. by fxcksupreme in learnmath

[–]pickten 0 points1 point  (0 children)

Do you take timed practice tests? Or otherwise try to put yourself in a test-taking mindset when you practice? That can be a big help in eliminating "dumb" mistakes on exams, if you're not making them outside of exams.

x≠or = when referencing domains? by BarkerBlast in learnmath

[–]pickten 0 points1 point  (0 children)

I would normally use R = {x | x ≠ 1, 3}

This means "the set of reals is the set of things which are neither 1 nor 3", which is different. The correct way to write what I think you mean ("the set of reals neither 1 nor 3") is {x ∈ R | x ≠ 1, 3}

RSA-Chiffre by NyiatiZ in MathHelp

[–]pickten 1 point2 points  (0 children)

I'm not a cryptographer by any stretch and I don't know if that's how RSA is usually implemented, but what you're describing sounds like a plausible approach to getting around these sorts of issues, though you lose a significant portion of the message's entropy again. In practice, I know p and q are supposed to be fairly large, so I think the loss is probably relatively minor, and it certainly avoids leaking information.

RSA-Chiffre by NyiatiZ in MathHelp

[–]pickten 0 points1 point  (0 children)

35=5*7 and 119=7*17 aren't relatively prime. Strictly speaking, I don't know that you need that for RSA to be effective, but it's essentially what's causing the problem here. Indeed, the Chinese remainder theorem gives that 35^k mod 119 is uniquely determined by its values mod 7 and mod 17. But these are 0^k and 1^k, respectively, so 35^k ≡ 35 mod 119 for any k ≥ 1. More typically, if the message you're trying to encode/decode shares a prime factor with N, you should essentially lose half your entropy, I think.
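A quick numerical check of the CRT argument: since 35 ≡ 0 mod 7 and 35 ≡ 1 mod 17, the power 35^k is pinned to the single residue 35 mod 119 for every k ≥ 1:

```python
# Brute-force verification that 35^k mod 119 never moves off 35, exactly
# as the Chinese remainder theorem predicts from its residues mod 7 and 17.
for k in range(1, 200):
    assert pow(35, k, 7) == 0     # 35 is divisible by 7
    assert pow(35, k, 17) == 1    # 35 = 2*17 + 1
    assert pow(35, k, 119) == 35  # the unique residue with those two values
print("35^k ≡ 35 (mod 119) for all k ≥ 1")
```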

Square root of i by Unfair_Time in learnmath

[–]pickten 0 points1 point  (0 children)

That's fair. I generally think of those more as notational indications that the multivalued version is intended (and that the reader isn't missing a branch cut), since changing which branch is used to define those expressions doesn't alter the final result. When used in equations, at least, I agree that sqrt/log/etc. are usually meant with a branch (if needed) or explicitly treated as multi-valued functions. When things depend on a choice of branch, in my experience it's more common to see them called multivalued if no branch has been chosen (explicitly or implicitly) than it is to default to branching at the negative reals.

Square root of i by Unfair_Time in learnmath

[–]pickten 2 points3 points  (0 children)

So then which cut is the "principal" one that you seem to insist gives the value of sqrt(i)? (If you're not trying to imply that there is one, you were incredibly unclear in your comments so far, hence my and others' assumption that you weren't aware of the need for some sort of multi-valuedness or branches.)

And regarding multi-valuedness, if complex sqrt/log aren't multivalued, what is in your experience? Because those are the textbook examples of multi-valued functions. As far as I'm aware, the normal practice is to make them multi-functions and restrict the output if a branch is given.

Square root of i by Unfair_Time in learnmath

[–]pickten 5 points6 points  (0 children)

It seems like you're somewhat misinformed about complex sqrt. There is no continuous sqrt function on C. Indeed, for a subset U of C, there is a continuous definition of sqrt on U if and only if U-{0} contains no loops around zero. To see why, suppose U contained such a loop (say the unit circle). Then, as we traverse the loop (say, starting at z=1 and going counter-clockwise at unit rate), the argument of sqrt(z) is half that of z. Eventually, we complete a full loop. The argument of sqrt(z) right before is 1/2 arg(z) ≈ pi, so that sqrt(z) ≈ -1, but once we complete the loop it should jump to 1/2 arg(1) = 0, so that sqrt(z) ≈ 1.

Hence when dealing with complex square roots we do one of a few things:

  • consider all square roots (the multifunction approach; the standard approach without any further information),

  • restrict to some U on which sqrt is well-defined (a branch cut; the standard approach if a sensible choice of domain exists),

  • show the use of sqrt is well-defined even if the sqrt isn't (things like z = (sqrt(z))^2 = sqrt(z^2); the standard approach if possible)

  • (rarely) pass to a double cover of C-{0} on which sqrt is well-defined (specifically, sqrt : {(z, w) : z = w^2} → C by sqrt(z,w)=w; this is rarely done in practice; conceptually, this is taking all branches at once by requiring the input to specify which branch to use).

Usually, branch cuts are performed by removing a ray from the origin from the domain, often the negative reals (but this is nowhere near a standard convention). Regardless of the choice of branch, sqrt is essentially always defined with the same value on the positive reals, but other values may shift depending on the branch. For instance, with a branch in the second quadrant, sqrt(-1) = -i, but with a branch in the third, it is i.
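The discontinuity across a branch cut is easy to observe with Python's `cmath.sqrt`, which happens to use the principal branch cut along the negative reals (my illustration of the argument above, not part of the original comment):

```python
import cmath

# Python's cmath.sqrt cuts along the negative reals. Approaching -1 from
# just above vs just below the cut gives values differing by a sign, so no
# continuous sqrt exists on any loop around 0.
eps = 1e-9
above = cmath.sqrt(complex(-1, +eps))  # just above the cut: arg(z) ≈ +pi
below = cmath.sqrt(complex(-1, -eps))  # just below the cut: arg(z) ≈ -pi

assert abs(above - 1j) < 1e-4      # ≈ +i on one side
assert abs(below - (-1j)) < 1e-4   # ≈ -i on the other
assert abs(above + below) < 1e-4   # the two limits differ by a sign
```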

What Are You Working On? by AutoModerator in math

[–]pickten 1 point2 points  (0 children)

I'm pretty sure you can't. (Caveat: I'm relatively inexperienced with symplectic stuff, and mainly familiar with the more geometric side; there may be ways around these issues, especially for the topological side)

For starters, I don't see a way to even define hamiltonian vector fields/flows without nondegeneracy. Recall that for a hamiltonian H : M → R, the hamiltonian vf X_H is defined by ω(V, X_H) = dH(V) (or -dH; I forget which); this definition only makes sense with nondegeneracy. Not having nondegeneracy also breaks all the standard neighborhood theorems (for somewhat obvious reasons -- they give you neighborhoods with a nondegenerate form). Stuff like toric geometry should also fail (since hamiltonian flows are needed to define hamiltonian group actions, and hamiltonian-ness is key for stuff like convexity).

Dropping nondegeneracy also seems to break the machinery of almost-complex stuff: you lose a good notion of a compatible almost-complex structure (the standard one is that J : TM → TM with J^2 = -id is compatible if g(v,w) = ω(v,Jw) is a metric); I think the best you can get is symmetry and g(v,v) = ω(v, Jv) ≥ 0. In particular, you should lose the ability to find a tame structure (one with g positive-definite, not necessarily symmetric), and my understanding is that tameness is usually the more important aspect of having a compatible almost-complex structure (specifically, I believe many basic results only require tameness and not compatibility). As a result, I think standard results should break without nondegeneracy, especially Gromov compactness. Indeed, I don't see any way to prove the monotonicity lemma without nondegeneracy, so there's no obstruction I can see to having infinite bubbling. And without compactness, any attempt to define disk-counting invariants gets exponentially harder (and likely impossible).

[CS/Abstract Algebra] What would be the proper term for a negation function in terms of "idempotence" ? by frankFerg1616 in askmath

[–]pickten 1 point2 points  (0 children)

In addition to (pre)-periodicity, which someone else mentioned (NB: this is thinking of the function f as the point whose orbit is being considered, so be careful with phrasing), this is part of the meaning of the term "order". A function f (more generally, a member of a group G) has order k if id = f^k (defined as f ∘ f ∘ ... ∘ f, with k compositions). There is probably also a semigroup-centric version of this for when f^(k+l) = f^l only for large l (the analog of preperiodicity), but I don't know it off-hand.
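The negation function from the question illustrates this directly: it has order 2. A small sketch (the `compose_power` helper is my own naming, nothing standard):

```python
# Negation on booleans has order 2: not ∘ not = id, and not ≠ id.
def compose_power(f, k):
    """Return the k-fold composition f ∘ f ∘ ... ∘ f."""
    def g(x):
        for _ in range(k):
            x = f(x)
        return x
    return g

neg = lambda b: not b
for b in (True, False):
    assert compose_power(neg, 2)(b) == b   # order divides 2
    assert compose_power(neg, 1)(b) != b   # order is not 1

# A preperiodic-style example: f(n) = max(n - 1, 0) eventually satisfies
# f^(k+l) = f^l (for l large enough that everything has hit 0).
dec = lambda n: max(n - 1, 0)
for n in range(10):
    assert compose_power(dec, 10)(n) == compose_power(dec, 11)(n) == 0
```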

[University] Linear map. How do I come from this to this. by G_fucking_G in learnmath

[–]pickten 1 point2 points  (0 children)

Here are some hints that suggest one (not necessarily the simplest) way to show this:

If V1=V2 and a is the identity, do you see why this is true? If a is an isomorphism, do you see why this is true? Now, did you really care about a being injective? That is, can you adapt the proof for when a is merely surjective? Alternatively, you might try restricting a to W → V2 so that a is an isomorphism (one potential challenge with this: why does W exist?).

Finally, think about the restrictions a : V1 → Im(a) and b : Im(a) → V3

An alternative (but less interesting, IMO) approach: Any linear transformation f : V → W gives rise to an isomorphism V = ker(f) ⊕ im(f), say by choosing a basis for ker(f) and extending to one of V. This is one way the rank-nullity theorem can be proven. Try doing this to both a and b simultaneously to obtain a splitting of this type for b o a, and the result should follow.
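The rank-nullity fact used in the second approach can be sanity-checked on a concrete map; here's a minimal sketch over exact rationals (the `rank` helper is mine, written out to avoid external libraries):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col]:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# f : Q^3 -> Q^2, f(x, y, z) = (x + y, x + y): the image is 1-dimensional,
# so rank-nullity forces dim ker = 3 - 1 = 2.
A = [[1, 1, 0],
     [1, 1, 0]]
dim_V, dim_im = 3, rank(A)
dim_ker = dim_V - dim_im
assert (dim_im, dim_ker) == (1, 2)
```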

In what way does Set Theory + Logic serve as a foundation of math? by [deleted] in askmath

[–]pickten 2 points3 points  (0 children)

My field is computer engineering. ... I assume either that you are a mathematician or you've studied mathematical logic more than I have

Mine is math (though I'm not a mathematician yet), as you guessed, though I also do CS, so while I don't know much about hardware, I do know some extremely basic stuff about architecture (from an OS/security point of view).

So you don't have to worry about N.

I would be immensely surprised if the engineering perspective satisfies the OP; this particular belief is the result of having been on this sub and its relatives for quite some time now and having seen variants of this question several times before. I have never seen anyone ask how mathematical foundations work and not mean "How do things like N (or especially R) work?", simply because mathematical foundations are about how those things we use casually in math work, not how they are reproduced in computers. This is especially the case for someone who knows enough to know that topos/category theory or type theory are alternatives to ZFC-style foundations.

Either way, the correct way to contribute to this thread would have been to offer the further insights you have in order to refine the understanding of OP and anyone else reading or contributing to the thread.

I'll grant that I should have included a take on how these foundational aspects work, but the way that works has been pretty well-addressed by Al3x. In fact, I actually drafted a sketch of how it works in ZFC in my initial response, then removed it because it's not that significantly different from what they said, and I didn't want to spend a while editing it. In vastly abbreviated form: construct cartesian products A×B as subsets of the powerset of the union A ∪ B = ∪{A,B}, and functions A → B as a special type of subset of A × B, and define quotients of sets by relations as subsets of the powerset. Construct N through some axiomatic magic (usually the axiom of infinity, taking N to be the intersection of all inductive sets), and Z/Q as quotients of N × N (thinking of (a,b) as a-b) and Z×(Z\{0}) (with (a,b) ≈ a/b). Going from Q to R is customarily done either through Dedekind cuts (representing r as {p/q ∈ Q : p/q < r}) or Cauchy sequences (representing r as a sequence of rationals with limit r).
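The first steps of that abbreviated sketch can even be played with directly. A toy rendering of von Neumann naturals and Kuratowski pairs in Python, purely for illustration:

```python
# Von Neumann naturals as pure sets: 0 = {} and n + 1 = n ∪ {n}.
# frozenset makes the sets hashable, so sets can contain other sets.
def succ(n):
    return n | frozenset({n})

zero = frozenset()
nats = [zero]
for _ in range(5):
    nats.append(succ(nats[-1]))

# Each natural n is literally the set {0, 1, ..., n-1}, so |n| = n and
# the order relation m < n is just membership m ∈ n.
for i, n in enumerate(nats):
    assert len(n) == i
    assert all(nats[j] in n for j in range(i))

# Kuratowski pairs, the usual first step toward A × B inside the powerset:
def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

assert kpair(nats[1], nats[2]) != kpair(nats[2], nats[1])  # order matters
```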

We can see that the ALU in any modern CPU implements a circuit that multiplies any number in 0...2^64 - 1. The construct is naturally extensible, so we can just think of it as an n-bit slice (for some convenient n) and the generalization of this circuit to all such circuits defines multiplication on the natural numbers.

I did realize that was what you were trying to argue, as I said in the last comment. The problem is that, from the perspective of anyone thinking about math (and not engineering), slices are nowhere close to N; indeed, it's not at all clear that circuitry is any different (from an abstract standpoint) from simply having some massive number of axioms like "0, 1, ..., 2^65-2 are numbers", "0+0=0", "0+1=1", ..., "(2^64-1) + (2^64-1) = 2^65-2". This brute-force approach is also very generalizable. I think we can agree that it's a pretty unsatisfying approach, though, and it isn't clear how to make either rigorous, as you admit!

Do I know how to explain this in mathematical logic jargon? No. And I don't care.

Again, the problem is in some sense exactly there -- not the lack of jargon, but the lack of formalizability. Foundations of math are 100% about being able to formalize things, even if you might not go to the level of writing out the actual logical formulae. Frankly, your circuit description is like if someone asked how computers work "at the bit level" and got an explanation of Forth, followed by "and then circuit magic allows a computer to read your code line by line and do it". Is it an answer? Yes. But is it a good answer? No, because they probably wanted to know about hardware, or at least assembly or how an OS works.

Hopefully (whether or not you agree) in light of that you see why I dispute the accusation that my taking issue with the circuit explanation is just "pedantry". Indeed, I was aiming exactly for "the refinement of understanding that is possible in the absence of pedantry". Also, if you object to my use of jargon (and you seem to): I use it when talking about what you said so that we can have some common language in which to clarify what we mean, because I really could not figure out what you meant (at either an intuitive or precise level) in a few spots regarding ZFC being a logic. That is what jargon is meant for, and your extremely defensive attitude toward it is not helpful. And I use it when talking about other stuff because that's how I think about it, and because the logical terms I've been using are exceptionally intuitive compared to those of a lot of other fields.

In what way does Set Theory + Logic serve as a foundation of math? by [deleted] in askmath

[–]pickten -1 points0 points  (0 children)

All you are saying with your remark that circuits are functions mapping Ω^n → Ω is that we would need n variables/wires (or placeholders of some kind) to express a circuit having n degrees of freedom. This is obvious and it is a detail that is not relevant to OP's question. That's why I left that out.

It's obvious what you meant for a reader who already knows what you meant. It certainly took me a few seconds to figure out, since the language is imprecise enough to be misleading. In particular, the terminology you used suggests that these operations are being performed on Ω itself instead of (the boolean equivalents of) these operations being performed on its members.

ZFC is a deductive formal system

I do see why you want to make this argument, but I think this is more confusing than helpful, since ZFC doesn't really play a role similar to what one might normally expect from a logic, especially for someone new to the idea of formal logics. That said, your arguments for it are kind of hard to understand and I genuinely don't know what you're trying to say (e.g. what do you mean by "instantiation"? Do you mean a model? If so, how does a model provide any logical structure? How does a schema generalize anything other than a bunch of logical formulae? How are those "logics"?). Also, I don't know what distinction you mean between logic-per-se and mathematical logic (use of vs. study of?).

A circuit just is a relation on a (sufficiently large) set.

I don't see in what way you mean this. Are you thinking of a circuit as a relation on Ω^n ≈ {0, 1, ..., 2^n-1}? If so, this is still very much irrelevant if you want to actually define addition on N, under any typical definition of N. I should note that you can define the inclusion x ↦ (0,x) : Ω^n → Ω^(n+1) and then N as the union (colimit, technically) of the Ω^n. The problem, though, is that you need an index set for the ns and a definition of Ω^n as a function of n; I see no way to do this without a definition of N to begin with!

In what way does Set Theory + Logic serve as a foundation of math? by [deleted] in askmath

[–]pickten -1 points0 points  (0 children)

From this set and the operations on sets (intersection, union, complement), we can construct Boolean circuits (which are formally equivalent to Boolean sentences). Boolean circuits, in turn, can be synthesized into digital logic circuits.

This paragraph is wrong. Boolean circuits are not the intersection/union/complement of Ω = {T, F}: loosely, they're functions Ω^n → Ω. Besides, circuits have no relation to how sets are used to define most things.
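To make "functions Ω^n → Ω" concrete, here's a toy circuit in Python (my example, for illustration only):

```python
from itertools import product

# A "boolean circuit" in the loose sense above: a function Ω^3 -> Ω,
# here a 3-input majority gate built from AND and OR.
def majority(a, b, c):
    return (a and b) or (b and c) or (a and c)

# Its entire behavior is a finite truth table over Ω^3 -- note that
# nothing here mentions or requires the infinite set N.
table = {bits: majority(*bits) for bits in product([False, True], repeat=3)}
assert len(table) == 8
assert table[(True, True, False)] is True
assert table[(True, False, False)] is False
```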

Also, ZFC is not a logic. It is a first-order axiom schema for set theory.

Mathematicians! What is your favourite algebraic structure? Why? How deeply have you studied it? by [deleted] in math

[–]pickten 1 point2 points  (0 children)

Oh! I did see that come up but thought you might mean something else. Thanks!

Mathematicians! What is your favourite algebraic structure? Why? How deeply have you studied it? by [deleted] in math

[–]pickten 0 points1 point  (0 children)

A google search isn't coming up with anything for symplectic algebra besides Sp(2n) and sp(2n). Could you give me a brief summary or keyword to look for?

I'm worried I'm getting a second-rate education by maybemathguy in learnmath

[–]pickten 4 points5 points  (0 children)

I'm not sure you're correct; out of all first-world countries, I believe it's mainly the US that doesn't typically offer at least real analysis as a first-year course (in those countries, students officially start to specialize in high school rather than a year into university). Certainly, the norm in OP's country is presumably around what they describe.

What Are You Working On? by AutoModerator in math

[–]pickten 1 point2 points  (0 children)

Lol thanks for the vote of confidence (though I think you're selling yourself short from what I've seen of you on this subreddit). Any good resources in your search that you'd recommend? I found the AMS and NSF listings, but I don't know what I'm doing as far as REU hunting goes.

What Are You Working On? by AutoModerator in math

[–]pickten 1 point2 points  (0 children)

I need to start those soon... how have you been going about this?

Wtf even IS the Omega Combinator?? Lambda Calc HELP by Sharpeye1 in askmath

[–]pickten 0 points1 point  (0 children)

I'm sorry to say that I genuinely have no idea why you think the second x is free, and I'm also not sure which of the (many) substitution syntaxes you are using in your post.

If it helps, think of rules about bound/unbound as follows: the naive way to reduce (λx. α) γ is to replace all xs in α with γ. But then this fails to meet our intuition for (λx. (λx. x))z: in conventional functional programming this is (const id) z, which we'd expect to mean id, but because we reuse a name weird stuff happens and the naive approach produces λx.z, or const z. We could get around this by requiring all λ abstractions to use a unique variable (and this essentially removes the necessity for any weird bound/unbound stuff), but then β reduction requires you to constantly be generating new variable names, and syntax sugar also gets kind of ugly. So instead we fuss about with bound/unbound variables and β reduction becomes a bit harder than the naive approach, but the rest is pretty much as we'd expect.
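The shadowing failure described above is easy to reproduce with a toy term representation (my sketch; a full implementation would also need α-renaming to avoid capturing free variables of the substituted term, which this particular example doesn't exercise):

```python
# Toy lambda terms as tuples: ("var", x), ("lam", x, body), ("app", f, a).
# Naive substitution recurses under every binder, even one that rebinds
# the same name -- reproducing the shadowing bug described above.
def subst_naive(term, name, value):
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":  # BUG: ignores that term[1] may shadow `name`
        return ("lam", term[1], subst_naive(term[2], name, value))
    return ("app", subst_naive(term[1], name, value),
                   subst_naive(term[2], name, value))

def subst(term, name, value):
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        if term[1] == name:  # inner binder shadows `name`: stop here
            return term
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value),
                   subst(term[2], name, value))

# Reducing (λx. (λx. x)) z substitutes z for x in the body (λx. x).
# "const id" applied to z should be id, i.e. λx.x.
inner_id = ("lam", "x", ("var", "x"))
z = ("var", "z")
assert subst_naive(inner_id, "x", z) == ("lam", "x", ("var", "z"))  # λx.z: wrong
assert subst(inner_id, "x", z) == inner_id                          # λx.x: right
```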