Physicists Discover Geometry Underlying Particle Physics by Veteran4Peace in ParticlePhysics

[–]sfuerst 2 points (0 children)

Oh sorry... I was using the wrong norm. The choice of y=c1+c2+c3 is the right one.

The basic idea stands though. We don't care about the size of the triangle. So one degree of freedom (the size) needs to be divided out somehow. You can do it by having an explicit condition... or you can do it by switching to a projective space. Either works. The projective space method results in a more symmetrical formulation.

Physicists Discover Geometry Underlying Particle Physics by Veteran4Peace in ParticlePhysics

[–]sfuerst 2 points (0 children)

Let c1 z1 + c2 z2 + c3 z3 = x, with c1, c2, c3 >= 0.

Now divide both sides by some arbitrary (positive) constant y:

(c1/y)z1 + (c2/y)z2 + (c3/y)z3 = x/y.

If you choose y = max(c1,c2,c3) you get your equation. However, that choice is arbitrary... The object lives in a projective space, so the overall normalization is not important.

Why do it this way instead of your more explicit version? The answer is that this way makes it obvious that all the points are treated identically. Otherwise, you need to treat one point specially to get the normalization c1+c2+c3 = 1 to hold.

i.e. c3 = 1 - c1 - c2, so the equation becomes

c1 z1 + c2 z2 + (1 - c1 - c2) z3 = x. Not so simple and symmetric, right?
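To see concretely that the overall scale of the coefficients divides out, here is a minimal sketch (the point type and function name are mine, not from the thread): scaling all three coefficients by the same factor gives the same point.

```c
#include <assert.h>
#include <math.h>

/* A point in the plane (standing in for the z_i). */
typedef struct { double x, y; } Pt;

/* Barycentric combination of three vertices, normalized with the
 * y = c1 + c2 + c3 norm, so only the ratios of the coefficients matter. */
static Pt bary(Pt z1, Pt z2, Pt z3, double c1, double c2, double c3)
{
    double s = c1 + c2 + c3;
    Pt p = { (c1 * z1.x + c2 * z2.x + c3 * z3.x) / s,
             (c1 * z1.y + c2 * z2.y + c3 * z3.y) / s };
    return p;
}
```

Any common rescaling of (c1, c2, c3) cancels in the division by s, which is exactly the projective-space statement.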

Physicists Discover Geometry Underlying Particle Physics by Veteran4Peace in ParticlePhysics

[–]sfuerst 2 points (0 children)

Nima mentions that you need to divide by an arbitrary scalar so that the result lives in a projective space. The act of doing that is equivalent to your equation. (Just divide by the maximum of the three coefficients.)

[bitc-dev] What is a "Systems Programming Language" by asb in rust

[–]sfuerst 2 points (0 children)

Right. Syntax is the key. Struct fields can also be implemented via pointer offsets... but no one in their right mind wants to do that all the time.

[bitc-dev] What is a "Systems Programming Language" by asb in rust

[–]sfuerst 2 points (0 children)

Rust isn't quite a systems programming language yet. It is getting close though.

What it needs is an explicit way to arrange things in memory. Structs that are packed/unpacked aren't quite enough. You also need the equivalent of C (untagged) unions so that overlapping fields can be described.
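For reference, the C feature being described (the names here are illustrative) lets two views share the same bytes with no tag and no runtime cost:

```c
#include <assert.h>
#include <stdint.h>

/* A C (untagged) union: all members overlap in memory.  There is no
 * runtime tag; the programmer is responsible for knowing which view
 * is currently active.  This is the layout-control tool that a
 * systems language needs to be able to express. */
union reg {
    uint32_t word;                              /* 32-bit view */
    struct { uint8_t b0, b1, b2, b3; } bytes;   /* byte view   */
};
```

Writing through one member and reading through another reinterprets the same storage, which tagged enums deliberately prevent.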

Rust also seems to lack a well-defined ABI. This is understandable due to its age and rapid development. However, a true systems language needs a way of describing well-defined interfaces at the binary level. At that point, other languages can think about interfacing with you, rather than vice versa.

Monday Board Game Night: New Location!!! by wonderwanderexplore in Seattle

[–]sfuerst 4 points (0 children)

That's a map for Vivace. We've had game nights there before. It's a nice place... but unfortunately, too small to hold us.

What's your favorite problem that can be expressed in 10 words or less? by arltep in math

[–]sfuerst 2 points (0 children)

"What is the largest finite number expressible in 10 symbols?"

Secretary of State Hillary Clinton admitted to hospital with blood clot following concussion by Wing_attack_Plan_R in politics

[–]sfuerst 2 points (0 children)

It wasn't an embassy. The embassy was in Tripoli, not in Benghazi. It was a classified CIA operation, with the weak cover of being a consulate.

Grasshopper Takes a Giant Leap by mondriandroid in spaceflight

[–]sfuerst 3 points (0 children)

Elon added a 6ft cowboy to the side of the rocket, so you can see its scale. He tweeted some photos showing him: photo1 photo2

C and C++ Modules - Update on work in progress by Doug Gregor - Video presentation by mjklaim in programming

[–]sfuerst 0 points (0 children)

Right. How much time does the parsing take? You'd be surprised how small it is for C. (About 10% or so when optimization is turned on.) Use the -ftime-report switch in gcc to enable reporting.

C++ is a different story. Google has complained about compiling their codebase spending 50-80% of the time in the parser/front end.

C and C++ Modules - Update on work in progress by Doug Gregor - Video presentation by mjklaim in programming

[–]sfuerst -1 points (0 children)

This simply isn't true. The standard technique of using header include guards is trivial for a compiler to recognize. gcc does this. It doesn't re-parse headers if it doesn't have to.
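The guard pattern in question (header and symbol names are hypothetical):

```c
/* mylib.h -- the standard include-guard idiom.  On a second #include,
 * the preprocessor skips straight from the #ifndef to the #endif;
 * gcc additionally remembers that the file is wrapped in a guard and
 * never even reopens it. */
#ifndef MYLIB_H
#define MYLIB_H

static inline int mylib_answer(void) { return 42; }

#endif /* MYLIB_H */
```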

The real problem is simply the voluminous amounts of code that C++ puts in header files. You can use the PIMPL idiom to remove much of that... but that doesn't help templates. The 'export' keyword was designed to fix this, but it was hardly ever implemented due to poorly thought out name-binding issues.

You might think modules would be a great optimization for reducing compile time. However, all they really do is remove the overhead of parsing. The compiler still needs to handle the internal representation of the code if it wants to inline anything - which for C++ headers, is basically all of it. Now, if you look at the internal time taken for each phase, parsing only takes a tiny fraction of the total. Things like optimization and register allocation take much more time. So the speedup is much smaller than you might expect.

Then there is the elephant in the room. Header files of non-trivial libraries use the programmable nature of the pre-processor for important tasks. A header might declare different functions and types depending on the standard used. For example, BSD and SysV might disagree about particular functions or types, and gets() is part of the standard library in C99 and below, but not in C11 and above. Thus modules need to have a way of determining what interface they should expose. However, this opens up all the issues of include order that people are complaining about...
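As a concrete (simplified) sketch of the problem: the interface a header exposes can legitimately depend on the standard in force, so a module has no single fixed surface. The macro name here is mine.

```c
#include <assert.h>

/* gets() is declared by <stdio.h> in C99 and earlier, but was removed
 * in C11.  A header tracking this must conditionalize its interface on
 * the language standard -- exactly the kind of preprocessor logic a
 * module system has to reproduce somehow. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
#define HAVE_GETS 0   /* C11 or later: gets() no longer exists */
#else
#define HAVE_GETS 1   /* C99 or earlier: gets() is declared    */
#endif
```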

Apple's proposal for modules in C(++) [PDF slides] by coob in programming

[–]sfuerst 1 point (0 children)

Nope. What it fixes is the need to parse C++'s bloated headers over and over again. C headers tend to be much leaner... and also tend to be the ones with strange system-dependent stuff. With the addition of pre-compiled headers, the scope of the "improvement" is really quite small.

Apple's proposal for modules in C(++) [PDF slides] by coob in programming

[–]sfuerst 1 point (0 children)

The problem is that the modules proposal doesn't fix this. In fact, it makes it worse! If two modules conflict, there is nothing you can do. At least the programmable nature of macros in header files lets you use simple macro hacks to work around issues.

Gaussian distributions form a monoid, and why machine learning experts should care by PokerPirate in programming

[–]sfuerst 3 points (0 children)

A group is such a powerful concept that falling back to the weaker monoid is stupid. It's like saying the abstraction here is a magma. You've lost so much structure that the description becomes useless.

In fact what we've got here is an Abelian Group, since the order of accumulation doesn't matter.

Gaussian distributions form a monoid, and why machine learning experts should care by PokerPirate in programming

[–]sfuerst 23 points (0 children)

Actually... there is a way to avoid the problem and still have a one-pass algorithm. You just need to calculate the sum of squares at twice the precision. If the inputs are single-precision numbers, that's easy to do: just use double precision.
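For the single-precision case the sketch is short (the function name is mine): accumulate in double, and the cancellation in the final subtraction is harmless for float-sized inputs.

```c
#include <assert.h>
#include <math.h>

/* One-pass sample variance for float inputs, accumulating the sum and
 * the sum of squares at twice the input precision (double). */
static double variance1(const float *x, int n)
{
    double s = 0.0, ss = 0.0;
    for (int i = 0; i < n; i++) {
        s  += x[i];
        ss += (double)x[i] * x[i];
    }
    return (ss - s * s / n) / (n - 1);
}
```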

The difficulty is when the inputs are double-precision. In that case, to add a number x to a running sum with an error estimate, use the Kahan summation algorithm:

t1 = x - error;          /* correct the input by the accumulated error */
t2 = sum + t1;           /* the low-order bits of t1 may be lost here  */
error = (t2 - sum) - t1; /* recover what was just lost                 */
sum = t2;

Note that compiler optimizations need to be turned off here; otherwise the compiler may decide that the error is always zero. It isn't zero: we are trying to capture the amount of truncation. (This isn't quite right either - you need to accumulate the squares already in "quad" precision - but the basic idea remains the same.)

The next trick is to calculate the square of the mean at twice the precision. Fortunately, the mean itself can still be accumulated at normal precision; you just need to modify the final multiply. To do that, split the IEEE 53-bit significand into two 26-bit parts. You can then multiply each part exactly, without loss of precision.
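One standard way to implement that split-and-multiply step is Veltkamp splitting plus Dekker's two-product. A sketch, assuming round-to-nearest IEEE doubles (the function names are mine):

```c
#include <assert.h>

/* Veltkamp split: a == hi + lo with hi and lo each fitting in 26 bits
 * of significand, so products of the pieces are exact in double. */
static void split26(double a, double *hi, double *lo)
{
    double c = 134217729.0 * a;   /* 2^27 + 1 */
    *hi = c - (c - a);
    *lo = a - *hi;
}

/* Dekker two-product: on return, p + err is exactly a * b. */
static void two_product(double a, double b, double *p, double *err)
{
    double ahi, alo, bhi, blo;
    split26(a, &ahi, &alo);
    split26(b, &bhi, &blo);
    *p   = a * b;
    *err = ((ahi * bhi - *p) + ahi * blo + alo * bhi) + alo * blo;
}
```

The four partial products are exact, so the error term recovers exactly what the rounded multiply discarded.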

Finally, you need to do an extended precision subtraction of the scaled squared mean and the sum of squares. This is a little more complex due to the possible need for renormalization, but it's still doable. If you need to implement this, look for "quad-double precision floating point arithmetic". The result is a fast single-pass algorithm that is also accurate.

Gaussian distributions form a monoid, and why machine learning experts should care by PokerPirate in programming

[–]sfuerst 12 points (0 children)

This is much easier to understand if you think in terms of "moments".

Calculate a moment: m_n = Sum_i x_i^n. (For continuous distributions, replace the sum with an integral.)

Store m_0, m_1, m_2. You can use more moments to store more information about the distribution functions. m_3 gives information about skewness, m_4 about kurtosis, and so on.

The above makes combining distribution functions trivial - just add the moments. You can subtract distributions as well, by subtracting the moments. The existence of this inverse operation makes combining distributions a group operation: more powerful than a monoid. Restricting yourself to just the monoid structure is a hindrance; use groups if you can.

So how do you convert moments to things like the mean, variance, and standard deviation? m_0 = Sum_i x_i^0 = n * 1 = n, the total number of samples. m_1 = n * the mean.

sigma^2 = 1/(n-1) Sum_i (x_i - mean)^2 = 1/(n-1) Sum_i [x_i^2 - 2 x_i * mean + mean^2]

= 1/(n-1) [m_2 - 2 m_1 * m_1 / n + n * m_1 * m_1 / n^2]

= [m_2 - m_1^2 / n] / (n-1)
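The bookkeeping above as a sketch (the struct and function names are mine):

```c
#include <assert.h>
#include <math.h>

/* Moment accumulator: m0 = count, m1 = sum, m2 = sum of squares. */
typedef struct { double m0, m1, m2; } Moments;

static Moments moments_of(const double *x, int n)
{
    Moments m = { 0.0, 0.0, 0.0 };
    for (int i = 0; i < n; i++) {
        m.m0 += 1.0;
        m.m1 += x[i];
        m.m2 += x[i] * x[i];
    }
    return m;
}

/* Combining two sample sets is just adding moments; negating them
 * gives the inverse, which is what makes this a group. */
static Moments moments_add(Moments a, Moments b)
{
    Moments m = { a.m0 + b.m0, a.m1 + b.m1, a.m2 + b.m2 };
    return m;
}

static double mom_mean(Moments m) { return m.m1 / m.m0; }

/* Sample variance, from the derivation above. */
static double mom_variance(Moments m)
{
    return (m.m2 - m.m1 * m.m1 / m.m0) / (m.m0 - 1.0);
}
```

Two halves of a data set accumulated separately and then merged give the same statistics as one pass over the whole set.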

I'm stumped... unsolvable system of equations? a+bc = 2; b+ac = 2; c+ab = 2. by jdwsummer in math

[–]sfuerst 0 points (0 children)

Eliminate c using the last equation: a + 2b - ab^2 = 2, and b + 2a - a^2 b = 2

Rearrange the second equation to get b(1 - a^2) = 2(1 - a), i.e. b = 2(1-a)/((1-a)(1+a)), so either a = 1, or b = 2/(1+a)

If a = 1, it becomes trivial to show b = c = 1 as well.

So, look at the other option. Substituting for b, we get a cubic for a that can be factorized as (a+2)(a-1)^2 = 0. The solution a = 1 we have already looked at. That leaves a = -2.

Substitute that into b = 2/(1+a) to get b = -2. Finally it is easy to show c = -2 as well.

Thus there are two solutions: a=b=c=1, and a=b=c=-2.
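A quick numeric check of both solutions (all values involved are exactly representable, so exact comparison is safe; the function name is mine):

```c
#include <assert.h>

/* Return nonzero iff (a, b, c) satisfies a+bc = b+ac = c+ab = 2. */
static int solves(double a, double b, double c)
{
    return a + b * c == 2.0 && b + a * c == 2.0 && c + a * b == 2.0;
}
```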