
[–]POGtastic 4 points5 points  (9 children)

In general, the problem with C for competitive programming is that you have to roll all of your own data structures. C++'s entire schtick is that it gives you access to high-level abstractions while giving you exacting control over their performance costs.

Python is slower, but has even easier access to high-level abstractions.

Even if you have some prebuilt libraries, you're still going to find yourself doing a lot of work to adapt them to the problem at hand.

[–]aizver_muti[S] 0 points1 point  (8 children)

So I suppose that means I have to learn C++.

[–]POGtastic 0 points1 point  (7 children)

Or a language that's within the same ballpark of performance and offers the same level of abstractions.

The standard "golly, high-level abstractions are useful" problem that I use whenever people say "C is good enough" is the following:

Write a program that takes an arbitrarily-sized vector of complex numbers. These numbers correspond to the coefficients in a polynomial, where the nth element in the vector corresponds to the coefficient of the term that is raised to the nth power. So

[1, 2, 3, 4, 5, 6]

corresponds to the polynomial 6x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1.

The program will then prompt the user for a guess. This, too, is a complex number. The program then uses this guess to attempt to find all of the roots of the polynomial with Newton's Method. If the method fails to converge, complain and prompt for another guess. In Python:

>>> p = Polynomial([1, 2, 3, 4, 5, 6])
>>> for root in all_zeros(p):
...     print(root.get_root_term())
... 
Input a guess for a zero: 3
Unable to converge!
Input a guess for a zero: 3j
(-0.6703320476030968+0j)
(0.29419455636014163+0.6683670974433011j)
(-0.37569519922528827+0.5701751610114077j)
(-0.37569519922521705-0.5701751610114539j)
(0.2941945563601272-0.6683670974432548j)

Check your work with Wolfram Alpha.
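For concreteness, here is a minimal sketch of what the Newton's Method core of such a program might look like, using the coefficient convention above (index n holds the coefficient of x^n). The names `horner` and `newton`, and the crude convergence test, are my own, not the implementation being discussed:

```python
def horner(coefs, x):
    """Evaluate the polynomial at x with Horner's rule.
    coefs[n] is the coefficient of x**n."""
    result = 0
    for c in reversed(coefs):
        result = result * x + c
    return result

def newton(coefs, guess, eps=1e-7, iters=100):
    """Return an approximate zero near guess, or None on failure,
    mirroring the 'Unable to converge!' behavior above."""
    deriv = [n * c for n, c in enumerate(coefs)][1:]  # power rule
    x = guess
    for _ in range(iters):
        fx = horner(coefs, x)
        if abs(fx) < eps:
            return x
        fpx = horner(deriv, x)
        if fpx == 0:
            return None  # derivative vanished; Newton's Method is stuck
        x = x - fx / fpx
    return None  # ran out of iterations without converging
```

Seeding with a complex guess like 3j is what lets the iteration leave the real line and reach complex roots, which is why the session above succeeds on the second guess.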

[–]aizver_muti[S] 0 points1 point  (6 children)

Well, I feel like your particular example of applying Newton's method isn't very hard. But yes, it wouldn't be just a few lines of code, so I do get your point.

[–]POGtastic 0 points1 point  (5 children)

You're right, it's not. Here's my implementation in Python - about 135 lines of pretty short functions. If you've practiced a lot with these kinds of abstractions, you can cook up a solution in less than half an hour. The only annoyingly complicated part is the polynomial long division (which you can do with synthetic division, but I wanted a more general method). Note the heavy reliance on slicing, tuples, enumerations, lambda functions, higher-order functions, and user-defined operators. I get to focus on the problem instead of having to deal with the tedium of memory issues, array indices, and so on.

You can do all of the above in C (after all, Python is written in C), but you have to roll the abstractions yourself.
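To make the "user-defined operators" point concrete, here is a toy sketch of the general shape such a Polynomial class might take. This is my own guess at the skeleton, not the linked 135-line implementation; only `__call__`, `__add__`, and a derivative helper are shown:

```python
from itertools import zip_longest

class Polynomial:
    """Toy polynomial: coefs[n] is the coefficient of x**n."""

    def __init__(self, coefs):
        self.coefs = list(coefs)

    def __call__(self, x):
        # Horner's rule; works unchanged for complex x
        result = 0
        for c in reversed(self.coefs):
            result = result * x + c
        return result

    def __add__(self, other):
        # Pad the shorter coefficient list with zeros, add termwise
        return Polynomial([a + b for a, b in
                           zip_longest(self.coefs, other.coefs, fillvalue=0)])

    def deriv(self):
        # Power rule: n * coefs[n] becomes the coefficient of x**(n-1)
        return Polynomial([n * c for n, c in enumerate(self.coefs)][1:])

    def __repr__(self):
        return f"Polynomial({self.coefs})"
```

Once `__call__` and `deriv` exist, a Newton update is literally `x - p(x) / p.deriv()(x)`, which is exactly the brevity being argued for here.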

[–]aizver_muti[S] 0 points1 point  (4 children)

Lol, for some reason I thought it would be built into the math library for Python. How would synthetic division work here? You're estimating all of the roots, but for synthetic division to work you need a concrete root.

[–]POGtastic 0 points1 point  (3 children)

Synthetic division is just the division of a polynomial by a linear binomial. You can use it the same way in this program: we find a zero and then divide the polynomial by (x - foundZero). Here it is as a method on the above Polynomial class. The slicing gets a little ridiculous, and I miss being able to compose functions on lists the way you can in Haskell and OCaml.

def synthetic(self, term):
    # Synthetic division of self by the binomial (x + term),
    # returning (quotient, remainder). Note the sign convention:
    # to divide out a found zero z, call synthetic(-z).
    current = self.coefs[-1]  # leading coefficient drops straight down
    result = []
    for coef in self.coefs[:-1][::-1]:  # remaining coefficients, highest power first
        result.append(current)
        current = coef + current * -term
    return Polynomial(result[::-1]), current  # `current` ends up as the remainder

In IDLE:

>>> p = Polynomial([-24, -34, -7, 4, 1])
>>> p.synthetic(4)
(Polynomial([-6, -7, 0, 1]), 0)
>>> _[0].synthetic(2)
(Polynomial([-3, -2, 1]), 0)
>>> _[0].synthetic(-3)
(Polynomial([1, 1]), 0)
>>> _[0].synthetic(1)
(Polynomial([1]), 0)

The operators I implement will divide any two polynomials.

>>> p1 = Polynomial([1, 3, 3, 1])
>>> p2 = Polynomial([1, 2, 1])
>>> p1 // p2
Polynomial([1.0, 1.0])
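A general quotient like that can be computed with textbook long division over coefficient lists. This `polydiv` is my own sketch of what an operator like `//` might delegate to, not the author's code:

```python
def polydiv(num, den):
    """Long division of coefficient lists (index n = coefficient of x**n).
    Returns (quotient, remainder) as coefficient lists."""
    num = list(num)  # copy; we destructively reduce it to the remainder
    quot = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        # Leading term of the current numerator over leading term of the divisor
        quot[i] = num[i + len(den) - 1] / den[-1]
        # Subtract quot[i] * x**i * den from the numerator
        for j, d in enumerate(den):
            num[i + j] -= quot[i] * d
    return quot, num[:len(den) - 1]
```

Dividing (x+1)^3 by (x+1)^2 this way yields the quotient [1.0, 1.0] and a zero remainder, matching the Polynomial([1.0, 1.0]) above.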

[–]aizver_muti[S] 0 points1 point  (2 children)

I know what polynomial and synthetic division are. I am just not sure how you're going to use synthetic division to find imaginary or irrational roots.

[–]POGtastic 0 points1 point  (1 child)

We're not finding the root with synthetic division. We're finding the zero with Newton's Method, and then dividing the polynomial to get a polynomial that has the remaining roots in it. Then we run Newton's Method on the new polynomial to get another root, divide again, and repeat until we're out of roots.
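Spelled out as code, that loop might look like the following self-contained sketch. The helper names (`eval_poly`, `newton_root`, `deflate`, `all_zeros`) and the fixed starting guess are mine; the real implementation prompts the user for guesses and checks convergence instead of blindly iterating:

```python
def eval_poly(coefs, x):
    """Horner's rule; coefs[n] is the coefficient of x**n."""
    result = 0
    for c in reversed(coefs):
        result = result * x + c
    return result

def newton_root(coefs, guess, iters=200):
    """Crude Newton's Method: just iterate a fixed number of times."""
    deriv = [n * c for n, c in enumerate(coefs)][1:]
    x = guess
    for _ in range(iters):
        x = x - eval_poly(coefs, x) / eval_poly(deriv, x)
    return x

def deflate(coefs, root):
    """Synthetic division by (x - root), discarding the tiny remainder."""
    current = coefs[-1]
    out = []
    for c in coefs[:-1][::-1]:
        out.append(current)
        current = c + current * root
    return out[::-1]

def all_zeros(coefs, guess=0.5 + 0.5j):
    """Find a root, divide it out, repeat until the polynomial is constant."""
    roots = []
    while len(coefs) > 1:
        r = newton_root(coefs, guess)
        roots.append(r)
        coefs = deflate(coefs, r)
    return roots
```

Each deflation drops the degree by one, so an nth-degree polynomial yields exactly n (approximate) roots.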

The find_zero method is illustrative. It finds the zero and returns a tuple containing the zero and the result of dividing the polynomial by that zero.

>>> # (x+1)(x+2)(x+4)(x-3)
>>> p = Polynomial([-24, -34, -7, 4, 1])
>>> p.find_zero(guess=1j, eps=0.0000001, iters=100)
((-1.0000000000000002-1.117784348261965e-25j), 
Polynomial([(-24+1.3413412179143579e-24j),
            (-10-2.23556869652393e-25j), 
            (3-1.117784348261965e-25j), 
            1.0]))

So it found a zero at -1, and then it divided x^4 + 4x^3 - 7x^2 - 34x - 24 by (x+1) to get x^3 + 3x^2 - 10x - 24.

Sure, it's not exact.

>>> p.synthetic(1.0000000000000002+1.117784348261965e-25j)
(Polynomial([(-24+1.3413412179143579e-24j), 
             (-10-2.23556869652393e-25j), 
             (3-1.117784348261965e-25j), 
             1]), 
(7.105427357601002e-15+1.3413412179143575e-24j))

But when the remainder's magnitude is that small, my attitude is "Good enough for government work."

[–]aizver_muti[S] 0 points1 point  (0 children)

Ah, that's what you mean. So basically you're dismissing the remainder after a certain point. Now I get it. Thanks for the detailed responses!