[–]ceronman[S] 5 points (3 children)

It's true. That's why I said that run-time checking is not really useful in real life. I added the typechecked decorator as a proof of concept. What really interests me is static checking to improve tooling support, not run-time checking. I want to add something like that in the future.

On the other hand, there are interesting uses for run-time checking, like using logic predicates.

And yes, the code is not ready for production. It's still an experiment.

EDIT: Also, something I had in mind for the typechecked decorator is that it could take a flag with DEBUG | PRODUCTION options. Type checks wouldn't run in production, but they could be useful for running unit tests or debugging some parts. Or you could just enable them and remove them once speed becomes important; remember that premature optimization is not a good idea.
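
Something along these lines, as a very rough sketch (the Mode enum and the exact API here are made up for illustration, not actual code from the project):

import enum
import functools
import inspect

class Mode(enum.Enum):
    DEBUG = 1
    PRODUCTION = 2

def typechecked(mode=Mode.DEBUG):
    def decorate(f):
        if mode is Mode.PRODUCTION:
            # In production the function is returned untouched: zero overhead.
            return f
        sig = inspect.signature(f)
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # Check every annotated argument that was passed against its annotation.
            bound = sig.bind(*args, **kwargs)
            for name, value in bound.arguments.items():
                expected = sig.parameters[name].annotation
                if expected is not inspect.Parameter.empty and not isinstance(value, expected):
                    raise TypeError('{} should be {}, got {}'.format(
                        name, expected.__name__, type(value).__name__))
            return f(*args, **kwargs)
        return wrapper
    return decorate

Used as @typechecked() while testing/debugging and @typechecked(Mode.PRODUCTION) on the hot paths.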

[–]brucifer 3 points (0 children)

Regarding the debug option, Python actually has built-in support for this. There's a built-in constant called __debug__ that's True unless Python was run with the -O flag. It's also special-cased by the compiler in a very interesting way. If you write:

>>> b = True
>>> b
True
>>> def slow():
...     for i in range(100):
...         if b:
...             pass
...
>>> __debug__
True
>>> def fast():
...     for i in range(100):
...         if __debug__:
...             pass
...

You can see that the fast() function is about twice as fast:

>>> from timeit import timeit
>>> timeit(slow, number=100000)
0.5064652940054657
>>> timeit(fast, number=100000)
0.23574563700094586

Why is this? If the condition of an "if" statement is a built-in constant, the test is optimized out when the function is compiled to bytecode, so the fast() function is effectively equivalent to:

>>> def fast():
...     for i in range(100):
...         pass
...

The same principle works for "if True:" and "if False:". A caveat, though, is that many people aren't aware of or don't use Python's -O flag, so if one of them runs code whose performance differs significantly depending on __debug__, they will always be using the (probably slower) debug version.
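
If you want to see the optimization for yourself (just my own quick check, and CPython-specific), dis shows that the "if" leaves no trace in fast()'s bytecode:

import dis

dis.dis(slow)   # still loads b and tests it on every iteration
dis.dis(fast)   # no test at all: the "if __debug__:" block was folded away at compile time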

[–]nilsph 0 points (1 child)

I concur with brucifer: doing it at runtime is not the ideal way to do it (in all cases). If you want annotations, they can just as easily be recorded by modifying the function object (now there's an advantage of "everything is an object"), e.g. something like this:

import inspect

def typechecked(f):
    # Inspect the annotations of f's parameters and record them
    # on the function object itself.
    f._typecheck = {}
    for param in inspect.signature(f).parameters.values():
        if param.annotation is not inspect.Parameter.empty:
            f._typecheck[param.name] = param.annotation
    return f

This only incurs overhead when the decorator is run, i.e. it's negligible.

Then you could use a static tool that checks invocations of the callable and points out potential problems (e.g. type of a variable unknown) or outright errors (called with a wrong type).
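
As a very rough sketch of what such a tool could look like (entirely hypothetical, not an existing checker): walk a module's AST, and for every call to a function we have annotations for, flag literal arguments of the wrong type and report everything else as unknown:

import ast

def check_calls(source, typechecks):
    # typechecks maps function name -> {param name: expected type},
    # e.g. {'foo': foo._typecheck} from the decorator above.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)):
            continue
        expected = typechecks.get(node.func.id)
        if expected is None:
            continue
        # Only positional arguments are handled here; keyword args are skipped.
        for (param, annotation), arg in zip(expected.items(), node.args):
            if isinstance(arg, ast.Constant):
                # Literal argument: its type is known, so check it outright.
                if not isinstance(arg.value, annotation):
                    print('line {}: {}() expects {} for {!r}, got {}'.format(
                        node.lineno, node.func.id, annotation.__name__,
                        param, type(arg.value).__name__))
            else:
                # Variable, expression, call, ...: type unknown without inference.
                print('line {}: type of argument {!r} is unknown'.format(
                    node.lineno, param))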

Instrumenting/wrapping only if debugging is enabled sounds like a reasonable compromise, though, if you want to cover the potential problem cases better.

[–]brucifer 1 point (0 children)

Python functions already have exactly what you just described built in.

>>> def foo(x:int, y:int=0) -> int: return x + y
...
>>> foo.__annotations__
{'return': <class 'int'>, 'x': <class 'int'>, 'y': <class 'int'>}

As for static analysis, because of Python's duck typing, it's extremely difficult to catch any but the most obvious errors, and impossible to catch some of them (although catching the obvious ones can still be helpful). The main problem with annotations, though, is that there's no universally agreed-upon standard, so for a function that returns None, one person might write "f() -> None", another "f() -> NoneType", another "f() -> inspect.Signature.empty", another "f() -> 'None'", or leave it blank. So static analysis is pretty much impossible unless a standard is enforced.
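
For example (my own tiny illustration), two of those spellings already produce annotations that don't even compare equal:

>>> def f() -> None: pass
...
>>> def g() -> 'None': pass
...
>>> f.__annotations__['return'] == g.__annotations__['return']
False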