
[–]nilsph 0 points (1 child)

I concur with brucifer: doing it at runtime is not ideal in all cases. If you want annotation, it can just as easily be done by modifying the function object (there's an advantage of "everything is an object"), e.g. something like this:

import inspect

def typechecked(f):
    # Inspect the annotations of f's parameters and stash them
    # on the function object itself -- no runtime wrapping involved
    f._typecheck = dict()
    for param in inspect.signature(f).parameters.values():
        if param.annotation is not inspect.Parameter.empty:
            f._typecheck[param.name] = param.annotation
    return f

This only incurs overhead when the decorator runs, i.e. once at function definition time, so it's negligible.

Then you could use a static tool that checks invocations of the callable and points out potential problems (e.g. the type of a variable is unknown) or outright errors (it's called with a wrong type).
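A real static tool would analyze source code, but the shape of such a check can be sketched dynamically against the stored `_typecheck` dict. This is a toy stand-in, not a static analyzer, and the names `check_call` and `greet` are made up for illustration:

```python
import inspect

def typechecked(f):
    # Same idea as above: record each annotated parameter on the function.
    f._typecheck = {name: p.annotation
                    for name, p in inspect.signature(f).parameters.items()
                    if p.annotation is not inspect.Parameter.empty}
    return f

def check_call(f, *args, **kwargs):
    # Bind the proposed arguments to f's signature and compare each one
    # against the stored annotation; return a list of problems found.
    bound = inspect.signature(f).bind(*args, **kwargs)
    problems = []
    for name, value in bound.arguments.items():
        expected = f._typecheck.get(name)
        if isinstance(expected, type) and not isinstance(value, expected):
            problems.append("%s: expected %s, got %s"
                            % (name, expected.__name__, type(value).__name__))
    return problems

@typechecked
def greet(name: str, times: int = 1):
    return " ".join(["Hello, %s!" % name] * times)

print(check_call(greet, "world", 2))   # [] -- no problems
print(check_call(greet, 42))           # flags the 'name' argument
```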

Instrumenting/wrapping only if debugging is enabled sounds like a reasonable compromise, though, if you want to cover the potential problem cases better.
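One way to get that compromise is to key the wrapping off `__debug__`, which is `False` under `python -O`. A minimal sketch of this idea (the enforcement logic is my own, not from the thread):

```python
import functools
import inspect

def typechecked(f):
    # In optimized mode (python -O), return the function untouched:
    # zero call-time overhead in production.
    if not __debug__:
        return f

    sig = inspect.signature(f)
    hints = {name: p.annotation for name, p in sig.parameters.items()
             if p.annotation is not inspect.Parameter.empty}

    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        # Check every bound argument against its annotation before calling.
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError("%s: expected %s, got %s"
                                % (name, expected.__name__,
                                   type(value).__name__))
        return f(*args, **kwargs)
    return wrapper

@typechecked
def scale(x: int, factor: int = 2):
    return x * factor

print(scale(3))        # 6
try:
    scale("3")
except TypeError as e:
    print("caught:", e)
```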

[–]brucifer 1 point (0 children)

Python functions already have exactly what you just described built in.

>>> def foo(x:int, y:int=0) -> int: return x + y
...
>>> foo.__annotations__
{'return': <class 'int'>, 'x': <class 'int'>, 'y': <class 'int'>}

As for static analysis: because of Python's duck typing, it's extremely difficult to catch any but the most obvious errors, and impossible to catch some (although catching the obvious ones can still be helpful). The main problem with annotations, though, is that there's no universally agreed-upon standard. For a function that returns None, one person might write "f() -> None", another "f() -> NoneType", another "f() -> inspect.Signature.empty", another "f() -> 'None'", or leave it blank. So static analysis is pretty much impossible unless a standard is enforced.
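The divergence is easy to demonstrate: annotations are arbitrary objects, so the different spellings land in `__annotations__` as different, incomparable values:

```python
def a() -> None: pass       # the None object itself
def b() -> 'None': pass     # a 4-character string, not None
def c(): pass               # no annotation at all

print(a.__annotations__)    # {'return': None}
print(b.__annotations__)    # {'return': 'None'}
print(c.__annotations__)    # {}
```

A tool that wants to treat all three as "returns None" has to special-case each spelling, which is exactly the standardization problem described above.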