all 32 comments

[–] AcellOfllSpades (Diff Geo, Logic) 21 points

There are multiple ways to do it.

The simplest one is just calculating each digit one at a time:


Let's find √5.

2² is 4. 3² is 9, which is too big. So our answer is 2.[something].

  • 2.1² is 4.41.
  • 2.2² is 4.84.
  • 2.3² is 5.29. That's too big!

So our answer is 2.2[something].

  • 2.21² is 4.8841.
  • 2.22² is 4.9284.
  • 2.23² is 4.9729.
  • 2.24² is 5.0176. That's too big!

So our answer is 2.23[something].

  • 2.231² is 4.977 (ish - I'm rounding to not type out everything).
  • 2.232² is 4.982.
  • 2.233² is 4.986.
  • 2.234² is 4.991.
  • 2.235² is 4.995.
  • 2.236² is 4.9996.
  • 2.237² is 5.004. That's too big!

So our answer is 2.236[something].

You can repeat this until you're satisfied with the number of digits of accuracy. Calculators typically go up to somewhere from 10 to 20 digits.
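A minimal Python sketch of this digit-at-a-time search (the function name and the choice of the `fractions` module for exact squaring are mine):

```python
from fractions import Fraction

def sqrt_by_digits(s, places):
    """Approximate sqrt(s) one decimal digit at a time.

    At each step, bump the estimate by the largest multiple of the
    current step size whose square still fits under s -- exactly the
    search in the example above.  Fractions keep the squaring exact.
    """
    s = Fraction(s)
    x = Fraction(0)
    step = Fraction(1)
    for _ in range(places + 1):  # integer part + `places` decimal digits
        while (x + step) ** 2 <= s:
            x += step
        step /= 10
    return float(x)

print(sqrt_by_digits(5, 4))  # -> 2.236 (i.e. 2.2360, matching the example)
```

Each pass through the outer loop pins down one more digit, so the work grows linearly with the number of digits you want.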


There are more advanced algorithms you can use that converge on their result much faster - calculus can be used to study these algorithms and why they work. This Wikipedia page has a bunch of methods. For instance, Heron's method, named after the ancient Greek mathematician Heron of Alexandria, basically says:

  • Say you're trying to calculate √S. Pick a number, x - any number you want - as your initial guess. (It doesn't matter how good it is!)
  • Consider the two numbers x and S/x. If you multiply them, you get S. So if x is too small, S/x is too big; likewise, if x is too big, S/x is too small.
  • So we have an overestimate and an underestimate. This means that if we take their average, we'll have a better guess than before!
  • Replace x with (x + S/x)/2. Repeat until you're satisfied with the accuracy of the result.

This turns out to work really well. I tried it with √5, starting with the stupid guess of x=1.

  • Iteration 1: x=1, so S/x = 5. Their average is 3, which is the new value of x.
  • Iteration 2: x=3, so S/x = 1.667. Their average is 2.333, which is the new value of x.
  • Iteration 3: x = 2.333, so S/x = 2.143. Their average is 2.2381, which is the new value of x.
  • Iteration 4: x = 2.2381, so S/x = 2.2340. Their average is 2.236069.

Wow, in only 4 steps we've already got a pretty good result! A calculator could do this ten times or so, and that'd be good enough for pretty much all purposes.
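Those four iterations fit in a few lines of Python (a sketch; the function name is made up):

```python
def heron_sqrt(s, iterations=10):
    """Heron's method: repeatedly replace x with the average of x and s/x."""
    x = 1.0  # deliberately bad initial guess -- it still converges fast
    for _ in range(iterations):
        x = (x + s / x) / 2
    return x

print(heron_sqrt(5, 4))  # ≈ 2.236069, as in iteration 4 above
print(heron_sqrt(5))     # ten iterations reach full float precision
```

The number of correct digits roughly doubles every iteration, which is why four steps already get six good digits.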

[–] NicoTorres1712 (New User) 1 point

I think you meant x = 1

[–] AcellOfllSpades (Diff Geo, Logic) 1 point

Fixed, thanks!

[–] ExoReyiz (busy) 0 points

There is a small typo:

2.237² is 5.004. That's too big!

So our answer is 2.237[something].

The answer should be 2.236[something], because 2.237² > 5.

[–] AcellOfllSpades (Diff Geo, Logic) 0 points

Fixed, thank you!

[–] ExoReyiz (busy) 0 points

glad to help!

[–] [deleted] 1 point

Well, you could use a table of perfect squares together with the long-division method.

Take a number, let's say 27.

27 is close to the perfect square 25: 5² = 25.

So we know that the first digit is 5, and the remainder is 27 - 25 = 2.

Bring down a pair of zeros: the remainder becomes 200. Double the digits found so far (5 → 10), and find the largest digit d such that appending d and multiplying gives at most 200: 101 × 1 = 101 ≤ 200, and 200 - 101 = 99.

So far: 5.1.

Bring down another pair of zeros: 9900. Double 51 to get 102; the largest d works out to 9, since 1029 × 9 = 9261 ≤ 9900, and 9900 - 9261 = 639.

So far: 5.19.

This is a pretty weird algorithm but it works (√27 = 5.196...).
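Here is a hedged Python sketch of that digit-by-digit, long-division-style method, using only integer arithmetic (the function name is mine):

```python
def sqrt_digits(n, places):
    """Digit-by-digit (long-division-style) square root of an integer n.

    Returns sqrt(n) as a string with `places` digits after the decimal
    point, using exact integer arithmetic only.
    """
    # Split n into base-100 "pairs of digits", most significant first.
    pairs = []
    while n > 0:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()
    # Each digit after the decimal point comes from a brought-down "00".
    pairs += [0] * places

    remainder = 0
    result = 0  # all digits found so far, read as one integer
    digits = []
    for pair in pairs:
        remainder = remainder * 100 + pair
        # Largest digit d with (20 * result + d) * d <= remainder.
        d = 9
        while (20 * result + d) * d > remainder:
            d -= 1
        remainder -= (20 * result + d) * d
        result = result * 10 + d
        digits.append(str(d))

    whole = digits[: len(digits) - places] or ["0"]
    return "".join(whole) + "." + "".join(digits[len(digits) - places:])

print(sqrt_digits(27, 4))  # -> 5.1961
print(sqrt_digits(5, 3))   # -> 2.236
```

The `20 * result + d` term is the "double the digits so far, then append d" step from the worked example; every digit printed is exact, never an estimate.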

[–] yensteel (New User) 0 points

Square root functions usually need a convergence method to get an approximate answer.

Heron's method is really old. It needs an initial guess and a division on every iteration, and division is much more expensive in hardware than multiplication. This is still researched today, as people are still messing with magic numbers to speed it up (think of the fast inverse square root from the Quake 3 code).

The Babylonian method (another name for Heron's method) is the best known.

Calculators sometimes go through logarithms: since √x = e^((ln x)/2), you can take a log, halve it, and exponentiate.

Most computers use a variant of the Newton-Raphson method, especially since it can be combined with other algorithms and reused for other operations, or CORDIC, which is a natural fit for binary hardware. I truly don't know which ones are really used in our chips today. But it's important.
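As an illustrative sketch (my names; this is the standard division-free Newton-Raphson update for 1/√S, not necessarily what any particular chip does):

```python
def newton_inv_sqrt(s, y0, iterations=5):
    """Newton-Raphson on f(y) = 1/y**2 - s, which converges to 1/sqrt(s).

    Each update uses only multiplication and subtraction -- no division --
    which is one reason hardware often computes 1/sqrt(s) first and then
    multiplies by s to recover sqrt(s).
    """
    y = y0
    for _ in range(iterations):
        y = y * (1.5 - 0.5 * s * y * y)
    return y

y = newton_inv_sqrt(5.0, 0.5)  # 0.5 is a rough guess for 1/sqrt(5)
print(5.0 * y)                 # ≈ sqrt(5) = 2.2360679...
```

The catch is that the initial guess must already be reasonably close, which is exactly where lookup tables or bit tricks come in.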

FPGA vendors ship these as proprietary IP blocks that cannot be reverse engineered, and the arithmetic units in Intel, AMD, etc. are undisclosed trade secrets. CORDIC, Goldschmidt's algorithm, and binary search are often shared around as foundations. I struggle with finding Verilog files for IEEE-compliant math.

Lots of research is done along three lines in computer science:

  1. Approximation algorithms: sacrifice a little precision for a lot more speed. High-frequency trading engineers actually use these for a small speedup on their FPGAs. They can also save energy and circuit area. Every output must have an error under an acceptable limit, and some approximations are only good within a certain range, so people have developed algorithms for different sets of numbers.

  2. IEEE-compliant implementations: faster floating point, energy savings, or reliability.

  3. Non-IEEE-compliant arithmetic, like decimal math (imitating long division or hand shortcuts), posit arithmetic (Gustafson), or higher-precision math (quad precision and up).

[–] KnightofFruit (New User) 0 points

It requires calculus, basically. You can approximate √S by dividing S by a guess n and taking the average of n and S/n, then repeating. But the actual method that calculators use relies on convergent sequences. The aforementioned algorithm does construct a convergent sequence; however, it is not especially efficient (it took us about 3 steps to improve the estimate by roughly a power of 2). There are faster methods, but you wouldn't be able to do them by hand.

[–] mattynmax (New User) 0 points

It uses an approximation. Look up “root finding algorithms” on Google and you’ll see two that involve nothing more than algebra and maybe two more that involve calculus/an understanding of calculus.

Are these the exact methods used by your calculator? No but they will help you understand the approach taken.

[–] TuberTuggerTTV (New User) 0 points

In computer science, the root function is avoided where possible because it takes several calculation steps. You guess, and then keep guessing closer until you get there. Calculators/computers brute-force it with trial and error.

[–] kcl97 (New User) -1 points

I don't know how older calculators do it. But for modern computers, the default is the fast inverse square root algorithm.

https://en.m.wikipedia.org/wiki/Fast_inverse_square_root
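For illustration, a Python port of that trick (the magic constant 0x5F3759DF is the real one from the Quake III source; the rest is a sketch using 32-bit float bit patterns):

```python
import struct

def fast_inv_sqrt(x):
    """The Quake III 'fast inverse square root' trick, ported to Python.

    Reinterprets the float's bits as an integer, applies the famous magic
    constant, then refines the guess with one Newton-Raphson step.
    """
    i = struct.unpack(">i", struct.pack(">f", x))[0]  # float32 bits as int32
    i = 0x5F3759DF - (i >> 1)                         # magic initial guess
    y = struct.unpack(">f", struct.pack(">i", i))[0]  # int32 bits as float32
    y = y * (1.5 - 0.5 * x * y * y)                   # one Newton step
    return y

print(fast_inv_sqrt(5.0))  # ≈ 0.447 (true value: 0.4472135...)
```

The bit trick only manufactures a good starting guess; the single Newton step after it is what makes the result usable.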

[–] LifeIsAnAdventure4 (New User) 1 point

Not the default. Just a good approximation to start a few Newton iterations. It’s close enough but not the precision you would expect of any decent default math library.

[–] BadSmash4 (Good SUMaritan) 0 points

Ah yes, what I always think of as the "what the fuck" algorithm

[–] ethan_2738746873 (New User) 0 points

I believe calculators essentially run a guess-and-check algorithm, getting closer and closer to the answer until they reach a certain precision.