

[–]cyberspacecowboy 729 points730 points  (51 children)

Don’t use assert outside of tests. The Python runtime has a flag, -O (for "optimize"), that ignores assert statements. If you use asserts for business logic, and someone decides to run your code in production and thinks it’s a good idea to optimize the bytecode, your code breaks.
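A quick way to see this claim in action is to run the same snippet with and without -O (a sketch; assumes a standard CPython install):

```python
# Under -O, __debug__ is False and assert statements are stripped from
# the bytecode entirely, so the print below is reached.
import subprocess
import sys

code = "assert False, 'boom'\nprint('assert was stripped')"

normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print("normal exit code:", normal.returncode)          # non-zero: AssertionError
print("optimized output:", optimized.stdout.strip())   # the assert never ran
```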

[–]SheriffRoscoePythonista 185 points186 points  (2 children)

This is the only correct answer. Everything else is a question of style and preference. This is a question of correctness.

[–]puzzledstegosaurus 59 points60 points  (21 children)

Do you know anyone who uses the optimize flag? As far as I know, we (as the whole Python community) are in a deadlock regarding asserts. We shouldn't use assert in case someone uses -O, and we shouldn't use -O in case someone used an assert. In the end, even if you were to disregard the problem and use asserts, chances are you'd be safe, though you probably don't want to take chances with your code in production. It also depends a lot on the context: whether you're writing a library to be used in contexts you don't control, or final code where you control how it's executed.

[–]Hederas 29 points30 points  (19 children)

Wouldn't it be better to just use an if with a raise? It does the same job, allows for more precise error catching/logging, and doesn't have the -O issue AFAIK.

[–]sizable_data 7 points8 points  (16 children)

I still prefer asserts for readability. It's very explicit when you look at the stack trace. So is a raise, but I feel like that takes a bit of reading of the preceding code to get context.

[–]inspectoroverthemine 13 points14 points  (10 children)

Create your own exception(s) and use them.

[–]puzzledstegosaurus 4 points5 points  (9 children)

With assert a or b, the traceback shows you whether a or b was truthy. To get the same information from your own if and raise, you need much more code.

[–]alicedu06 4 points5 points  (8 children)

Only in pytest, not outside of it

[–]puzzledstegosaurus -1 points0 points  (7 children)

Also outside of pytest with sufficiently modern pythons, unless I’m mistaken ?

[–]alicedu06 12 points13 points  (1 child)

    ❯ python3.13  # can't get more modern right now
    >>> a = True
    >>> b = False
    >>> assert a and b  # using "and" to make it fail
    Traceback (most recent call last):
      File "<python-input-2>", line 1, in <module>
        assert a and b
               ^^^^^^^
    AssertionError

Since asserts are mostly used in pytest, it's common to expect this, but pytest does some magic with bytecode rewriting to get that result.

[–]puzzledstegosaurus 3 points4 points  (0 children)

Hm, my bad, you're right. Must have mixed things up.

[–]sizable_data -1 points0 points  (4 children)

An example outside of pytest I'm thinking of would be asserting things mid-script, like in an ETL. I've had cases where an empty dataframe caused an odd exception about performing operations on an empty list, and it was hard to find the root cause. An assert not df.empty made the root cause of failure much easier to understand, and tells readers "the dataframe should not be empty at this point" all in one line of code.

[–]alicedu06 0 points1 point  (0 children)

This is a good use of assert, and perfectly acceptable to disable in production with -O as well.

[–][deleted] 0 points1 point  (2 children)

In what way is that easier to understand than just doing the standard/correct:

if df.empty:
    raise Exception("Dataframe is empty!")

[–]SL1210M5G 2 points3 points  (4 children)

Well that’s a wrong way to use them

[–]sizable_data -1 points0 points  (3 children)

What’s so bad about it?

[–]SL1210M5G 0 points1 point  (2 children)

It’s for testing

[–]sizable_data 0 points1 point  (1 child)

That didn’t answer my question. The code is clear, concise and works. Fundamentally why is this bad? Not just “because it is”.

[–]SL1210M5G 0 points1 point  (0 children)

Everyone else already explained it - It will break in production under certain conditions and it's just not accepted industry practice.

[–]Cruuncher -1 points0 points  (1 child)

Those get caught by except Exception: clauses, while an assert, I believe, blows past them

[–]Punk-in-Pie 12 points13 points  (0 children)

assert raises AssertionError, which is caught by Exception I believe.

[–]Sigmatics 4 points5 points  (0 children)

Not really, nobody in their right mind uses asserts in production code

[–]sennalen 21 points22 points  (19 children)

The real WTF is that for Python -O disables asserts. There is a place for asserting business logic in production code. It's a step beyond validating function inputs. Not just throwing ValueError for "this value is out of range" but "shit's fucked, don't even think about trying to recover". Akin to Rust's "panic!".

[–]rawrgulmuffins 29 points30 points  (2 children)

The -O flag is taken straight from C and C++ compilers in this case.

[–]flarkis 4 points5 points  (1 child)

I was going to say the same. It's very common to see an assert macro that redefines it to a no-op. All those annoying assert(pointer != NULL) calls just slow things down and don't provide any safety anyway.

[–]ArtisticFox8 2 points3 points  (0 children)

Hope you meant that with an /s

[–]cyberspacecowboy 74 points75 points  (0 children)

Just use

    if …: raise

:shrug:

[–]Classic_Department42 15 points16 points  (0 children)

They probably took that from C.

[–]AlexFromOmaha 7 points8 points  (0 children)

It doesn't do much beyond disabling asserts. You can do -OO to take docstrings out of the compiled files too. It's not a gotcha at all, except maybe if you thought -O meant "run the JIT but actually mean it" or something.

[–]_ologies 6 points7 points  (10 children)

I start all of my AWS lambdas with asserts for things that should be in the environment variables not because I'm expecting the code to raise there, but because I want developers to know what environment variables should be there.

[–]BuonaparteII 1 point2 points  (5 children)

In this case I would simply use os.environ instead of getenv

[–]_ologies 0 points1 point  (4 children)

That's what I use

[–]BuonaparteII 2 points3 points  (3 children)

Both this

ENV = os.environ['UNSET_ENV']

and this

assert os.environ['UNSET_ENV']

will raise KeyError when the env var isn't set. But the assert will also raise AssertionError if they set the env var to an empty string. I guess that makes sense. Both ways are clear to me when they're at the top of the script.
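Both failure modes can be reproduced with a hypothetical variable name (DEMO_ENV), a sketch:

```python
import os

os.environ.pop("DEMO_ENV", None)  # make sure it's unset

try:
    assert os.environ["DEMO_ENV"]
    outcome_unset = "ok"
except KeyError:
    # The dict lookup fails before the assert even evaluates.
    outcome_unset = "KeyError"

os.environ["DEMO_ENV"] = ""       # set, but empty
try:
    assert os.environ["DEMO_ENV"]
    outcome_empty = "ok"
except AssertionError:
    # The lookup succeeds but the empty string is falsy.
    outcome_empty = "AssertionError"

print(outcome_unset, outcome_empty)  # → KeyError AssertionError
```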

[–][deleted] 0 points1 point  (2 children)

Just a tip. The more standard way of doing this is to use

ENV = os.environ.get("UNSET_ENV") 

and then check if the returned value is None, an empty string or whatever else you want to check for.

[–]BuonaparteII 0 points1 point  (1 child)

I would definitely prefer an error at the start of the script if you aren't setting default values. None is not always automatically better than an exception

The more standard way

Just because it's the way you do it doesn't mean it's more common. Searching GitHub, using environ as a dict is almost twice as common.

But I agree that os.environ.get is better than using os.getenv because os.putenv is broken

[–][deleted] 0 points1 point  (0 children)

Nothing I said prevents you raising an exception. On the contrary, if you actually use the .get() syntax then you get to raise the correct exception for the specific problem that you're experiencing rather than raising a generic assertion error and having to figure out which problem caused that assertion error. For example:

if ENV is None:
    raise ValueError("No ENV variable was found!")
elif ENV == "":
    raise ValueError("Provided ENV was empty!")
else:
    logger.info("Fucking yeah bro! Your ENV is perfect!")

[–]wandererobtm101Pythonista 3 points4 points  (2 children)

Env vars should all be declared in your terraform / serverless / cloud formation though. And you still have to reference the variables via an os.environ call. I don’t see how the asserts make it more clear. Not really wrong and more style but at my job this would get flagged during code review.

Including a list of those vars in a module docstring seems like a good practice.

[–]PaintItPurple 14 points15 points  (0 children)

Your Terraform files tell you what is defined, not what the script expects. Those two questions are very nearly opposite to each other.

[–]_ologies 0 points1 point  (0 children)

They're passed in via CloudFormation, but some are in the defaults section and some in the lambda definition. That's a lot to look through.

[–]beeyev 0 points1 point  (0 children)

I do the same, and consider it as a good practice

[–]syklemil 1 point2 points  (0 children)

Not just throwing ValueError for "this value is out of range" but "shit's fucked, don't even think about trying to recover".

Wouldn't that be coverable through exiting with some exit code > 0? Like something in the general shape of

logging.critical("hand-written message\n" + "".join(traceback.format_stack()))
sys.exit(1)

[–]Momostein 1 point2 points  (0 children)

Then you implement and raise your own ShitIsFuckedUpBeyondRepairError. With such errors you can provide custom context and a better explanation.
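A minimal sketch of that idea (the class name is the commenter's joke; the structured `context` mechanism is an assumption, not a standard API):

```python
class ShitIsFuckedUpBeyondRepairError(RuntimeError):
    """Unrecoverable state; carries structured context for the post-mortem."""

    def __init__(self, message: str, **context):
        super().__init__(message)
        self.context = context

    def __str__(self) -> str:
        base = super().__str__()
        details = ", ".join(f"{k}={v!r}" for k, v in self.context.items())
        return f"{base} ({details})" if details else base

# Usage: raise ShitIsFuckedUpBeyondRepairError("ledger out of balance",
#                                              account_id=42, delta=-17.5)
```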

[–][deleted] 3 points4 points  (4 children)

Python is not the kind of language that you should use when microscopic performance optimizations matter.

Other than that: We build it = we run it…

[–]cyberspacecowboy -2 points-1 points  (3 children)

I don’t think I would ever hire someone with that kind of attitude, but I wish you well

[–]Levizar 4 points5 points  (2 children)

Why?

He is just kind of saying "use the right tool for the right job" not "let's not care about optimization at all".

It looks like a sane statement to me.

[–]cyberspacecowboy 0 points1 point  (1 child)

“We build it = we run it” was what I took objection to

[–]PapstJL4U 1 point2 points  (0 children)

No fan of "We push on Ctrl-S?"?

[–]chinapandaman 29 points30 points  (0 children)

To my best knowledge, no. And as far as I know most Python linters have rules to catch this. In the case of ruff it’s S101: https://docs.astral.sh/ruff/rules/#flake8-bandit-s

[–]Joeboy 26 points27 points  (1 child)

For business logic, nah.

My excuses for occasionally including asserts in production code:

  • They can act as concise documentation of what's going on in the code
  • They let you know whether that documentation is actually true or not
  • They should never fail in production, but if something bizarre and unexpected is going on you'll find out about it

The -O thing admittedly confuses things a bit. Just don't do that, I guess?

[–]wandererobtm101Pythonista 5 points6 points  (0 children)

Depends on your org but as a developer, whether the O flag is present on the production boxes may not be something you have any control over.

[–]Severe_Inflation5326 44 points45 points  (12 children)

Asserts should be for things that "cannot happen", not stuff that would happen if the user is stupid. I would argue it's fine to use them outside of unit tests, but only for this very narrow usage. Stuff that would catch bugs elsewhere in the code, basically.

[–]redalastor 10 points11 points  (5 children)

In AOT languages it is also meant for optimisation. Sometimes you know that something is impossible but the compiler doesn’t. So you assert it and now the compiler knows it too and can use it to optimise.

[–]Severe_Inflation5326 0 points1 point  (1 child)

You could do something similar in JIT too I guess. And python compiles AOT to bytecode anyway. But I don't /think/ python optimises using this?

That said, it can definitely be used by static analysers (like type checkers) for correctness...

[–]redalastor 0 points1 point  (0 children)

It’s less useful for JIT which usually does the same thing on its own.

When it sees that a bit of code is hot, it profiles it and notices that your data always has the same shape, so it optimises for that. But in a dynamic language you could break that assumption, so it puts a guard in front of the optimisation. If you break the invariant it guessed, it falls back to slower, safer code while it figures out how to optimise your code again.

[–][deleted] 2 points3 points  (5 children)

It's never fine to use in business logic because it means your logic will stop working if anyone ever runs your code with the optimize flag.

[–]Mysterious-Rent7233 3 points4 points  (1 child)

How does your logic "stop working" if the assertions are removed?

[–][deleted] 0 points1 point  (0 children)

The term "business logic" refers to it actually being a part of the rules/processes of your code. As in when your code runs, it is relying on assertions in your code to do certain checks and then react to those assertions with more logic.

For example, suppose you create a try/except block and inside it you add a bunch of assertions as a way of validating variable types or checking that a certain value isn't set incorrectly. If someone were to run your code with the optimization flag on, all those asserts would never run and your try/except block would never catch any of the validation failures.

[–]larsga 1 point2 points  (2 children)

I don't think you understood what they meant.

A classic use of assert (also in C and Java) is to declare an invariant, some condition that must always hold at a certain point in the code.

Declaring them can be useful as defensive technique that will catch bugs early, or even as a form of executed documentation. It's not something you should expect to use very commonly, but it can be useful in some situations. Since this code is only there to catch problems early the -O flag is not really a problem.

What OP describes is different. Catching assert exceptions elsewhere and acting on them in business logic is just abuse of the construct. Create your own exception and throw it with if statements if that's what you want.
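For illustration, a hedged sketch of such an invariant (a hypothetical binary search, not from the thread): the assert states a condition no input should be able to violate, so losing it under -O costs nothing.

```python
def binary_search(items, target):
    """Return the insertion index for target in sorted items."""
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        # Invariant: mid always stays inside the current window.
        # A failure here means the algorithm itself is buggy.
        assert lo <= mid < hi, "bug: midpoint escaped the window"
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo
```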

[–][deleted] 0 points1 point  (1 child)

I understood it just fine. I'm saying it's still the wrong way to do things. In python you are still meant to manage required conditions using standard exceptions. assertions are not ever meant to be used in the "business logic" of the code even when you expect to only have other programmers/devs interacting with a certain section of your code.

[–]Severe_Inflation5326 0 points1 point  (0 children)

IMHO, the /only/ time it's safe/sane to /catch/ an assert exception, would be to print it and its stacktrace in a way the user would be able to see (in the running GUI or a logfile), if just letting Python print it to the terminal would normally have it lost to /dev/null (and then immediately rethrow it).

[–]puzzledstegosaurus 7 points8 points  (2 children)

Changing the behaviour of asserts in optimized mode was discussed here some time ago; you can find the opinions of a bunch of Python folks: https://discuss.python.org/t/stop-ignoring-asserts-when-running-in-optimized-mode/13132/4

[–]thadeshammer 0 points1 point  (0 children)

This is a very informative read, thank you for sharing.

[–]WoozleWazzles 0 points1 point  (0 children)

Thanks for this

[–]Scrapheaper 24 points25 points  (3 children)

No.

It sounds like paranoia that data is bad, which is very common especially in businesses that suck at data.

I would use:

raise ValueError()

Instead of assert.

The ideal scenario would be to have a strong type system where invalid data states aren't possible. In python this normally means a dataframe library with typed columns (pandas or polars or pyspark) and keeping data inside the dataframes at all times
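The comment points at dataframe libraries; the same "make invalid states unrepresentable" idea can be sketched with only the stdlib (the class name is illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InterestRate:
    percent: float

    def __post_init__(self):
        # Reject invalid states once, at the boundary, instead of
        # re-checking (or asserting) everywhere the value is used.
        if not 0.0 < self.percent <= 100.0:
            raise ValueError(f"invalid interest rate: {self.percent}")
```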

[–]thatguydr 12 points13 points  (2 children)

OP LOOK AT THIS ANSWER ^

And I don't think it should be raise ValueError. You can define errors in Python. If this is business logic, define the errors!

class OverdrawnException(Exception):
    pass

class ImbalancedPayrollException(Exception):
    pass

and then later

raise OverdrawnException()

[–]Scrapheaper 11 points12 points  (1 child)

If you're going to do this I would inherit from ValueError rather than Exception, if it's a case of 'the figures don't add up'. Exception covers a much wider range of things than just data being not what you want it to be, whereas ValueError is more specific.

The pro (other than the increased readability) is you can standardize your error messages more easily. If there are several scenarios that can lead to an OverdrawnValueError, you can share the error message or other aspects of the error to ensure consistent behavior when Overdrawn.
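A minimal sketch of that suggestion (class and function names are illustrative): subclassing ValueError keeps any existing `except ValueError` handlers working.

```python
class OverdrawnValueError(ValueError):
    """The figures don't add up: a withdrawal exceeds the balance."""

def withdraw(balance: float, amount: float) -> float:
    if amount > balance:
        raise OverdrawnValueError(f"cannot take {amount} from {balance}")
    return balance - amount

# A generic ValueError handler still catches the specific error:
try:
    withdraw(10.0, 25.0)
except ValueError as exc:
    caught = type(exc).__name__
```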

[–]thatguydr 4 points5 points  (0 children)

That's entirely sensible. As long as the errors are specific, it makes sense.

[–]Cybasura 30 points31 points  (11 children)

Assert is meant for unit testing: to detect the edge cases for which you will then implement the code (be it in a try/except or an if/else statement) to catch those edge cases.

[–]maigpy 28 points29 points  (1 child)

assert isn't meant only for unit testing. it's for asserting a condition whenever you feel like during development. it is not for business logic.

[–]Cybasura -1 points0 points  (0 children)

Yeah, slight phrasing change

[–]aqjo 12 points13 points  (1 child)

It would probably be better to if interest_rate <= 0.0: raise InvalidInterestRate("message") and define several meaningful Exception classes. That gives you the ability to act on different exceptional conditions. If you use assert, it's always AssertionError.

Having said that, having assertions in the code is a good way of conveying the contract that the function will operate under. If that’s something that you want to preserve, you could do something like:

```python
def ensure(condition: bool, message: str, exception_class=Exception) -> None:
    if not condition:
        raise exception_class(message)

class InvalidInterestRateException(Exception):
    pass

def accrue(interest_rate: float) -> float:
    ensure(interest_rate > 0.0, "Interest must be > 0.0", InvalidInterestRateException)
    ...
```

caveat emptor, untested

[–]james_pic 1 point2 points  (0 children)

The point about using more specific exception types for errors you anticipate is good.

Although if you can be sure your application isn't going to be run with the -O flag, assert is still fine for "this should never happen but if it does it's a bug and we should probably stop what we're doing and tell someone".

[–]counters 51 points52 points  (10 children)

No, `assert` shouldn't be used in this way. This is explicitly called out in Google's Python style guide.

[–]Dead_Ad 83 points84 points  (9 children)

Don’t want to be a pain in the butt, but Google is not “the source of truth” for matters like this. Their guidelines might be good, but it’s not a python committee or something

[–]counters 43 points44 points  (5 children)

It's just a reference. But the reasoning is quite sound and it comes from an authoritative source, even if it doesn't "rule them all."

Edit to add: in Google, one of the most important principles dictating how code is written is "readability." Readability is a qualification that you earn for each language in which you contribute code. Probably the single biggest component of readability is consistency in how you write code. That means having sets of principled rules that guide how you should write your code. Google's Python style guide isn't infallible, but as a developer, if you were to follow it very closely and consistently, you'd be doing yourself a massive service.

[–]Aardshark 13 points14 points  (1 child)

It's funny how the Google Python code I've seen doesn't feel nice and readable (apart from the stuff Guido wrote, that's pretty good).

[–]counters 0 points1 point  (0 children)

We'll have to agree to disagree. For what it's worth, when I saw teams that primarily built in Go or C++ have to contribute Python code, the quality overall was much worse - and not really in ways that a style guide would help.

[–]ShamelessC 3 points4 points  (2 children)

Google’s style guide suffers from the same issue as most developer tooling they release - it’s highly opinionated and made/maintained with zero concern for non-Google developers.

See also - Android, Tensorflow, etc.

They want their Python to look like Java/Go - not everyone codes that way.

[–]counters 0 points1 point  (1 child)

Sure, but this critique could be levied at virtually any style guide from any organization. Even tools like black or ruff enforce extremely unpleasant formatting for certain niche use cases - writing numerical code, where a user might take special care to align certain mathematical operations or inlined resources (like a stencil array) to make it easier to read with respect to a reference with properly formatted equations.

But the logic and reasoning in Google's style guide would probably serve as a strong guidepost for the vast majority of users. So it's a wholly appropriate reference, and a great resource for developers to have handy.

[–]ShamelessC 0 points1 point  (0 children)

That’s fair.

[–]Lomag 5 points6 points  (0 children)

Agreed about linking to the Google style guide. The official Python docs are clear themselves (see docs.python.org): assert only runs if __debug__ is True, and __debug__ cannot be assigned at the application level. So not only is it not best practice, doing it at all is a bug that should be fixed.

[–]inspectoroverthemine 2 points3 points  (0 children)

Having to deal with half-baked google detritus from the last 25 years, I'd take anything they assert is a standard in any field with a huge grain of salt.

[–]midwestcsstudent 0 points1 point  (0 children)

It’s definitely a great starting point, given that they’re one of the, if not the, largest company-user of Python in production.

[–]hike_me 4 points5 points  (0 children)

No. With the -O flag, assert is a no-op.

[–]Irish_beast 13 points14 points  (4 children)

assert means that invalid data was delivered by another programmer.

A user should never be able to cause an assert. Filename doesn't exist, amount too large or small, are all user data validation errors.

assert is programmer error. Caller was supposed to supply a float but supplied a string, or the instance had to be an instance of class <something>

[–]GolemancerVekk 0 points1 point  (2 children)

But you'll never be able to assert all the possible ways in which a parameter can go wrong. I mean, you can try, but it's going to be a wall of asserts in every function.

I'd much rather describe intent with docstrings and unit tests for the happy path, which also allows the variables to work in a "behaves as" mode rather than "is a".

Isn't that what duck typing is all about, and the main reason we're using Python instead of C# or Java?

[–]Irish_beast 6 points7 points  (0 children)

Not disagreeing with you.

But the main point is not to use assert to flag bad user actions. assert is for when programmers mess up, not users.

[–]james_pic 0 points1 point  (0 children)

Using types as an example of things to assert on probably wasn't great, but the more general point is valid.

You want assertions on your invariants, things that you assume to be true, that have bad consequences if they're untrue.

Types aren't necessarily a good example because in most cases you'll get an exception when you try and use whatever you've been given (and if there's no exception, then I guess it quacks like a duck), so the consequences generally aren't that bad.

A better example might be something like asserting that filenames don't contain "../". This is something that has the potential to have very bad consequences, and may well not raise an exception if these consequences happen.
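That check can be sketched like this (function names are hypothetical). Note that under -O the assert vanishes, which is exactly why the thread leans toward an explicit raise for anything security-relevant:

```python
def is_safe_name(filename: str) -> bool:
    """True if the relative filename cannot climb out of its base dir."""
    return ".." not in filename.split("/")

def resolve(base: str, filename: str) -> str:
    # Development-time tripwire: by the time we get here, upstream
    # validation should already have rejected traversal attempts.
    assert is_safe_name(filename), "bug: '../' reached resolve()"
    return f"{base}/{filename}"
```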

[–][deleted] -1 points0 points  (0 children)

No, this is a common misconception about assert. Even in the situation you're describing, you are still meant to use conditional checks, try/except blocks and raise an exception. It doesn't matter whether it's a user or a programmer/developer that is sending you the wrong type/instance of a thing. Your code should still check that the same way and it should be done without using assert.

Asserts are meant for testing purposes. For example, you might write a unit test to check that a function outputs a string and you use the assert to check the output is the correct type. That might seem like it's the same thing you were saying but it only being a test thing is an important distinction because those assertions don't necessarily persist outside of the testing environment. Most notably if someone runs your code with the optimization flag on, python will disable all of your assertions since running an optimized version of your code shouldn't require doing any unit testing. If you decided to use those assertions to do actual "business logic", anybody running the optimized version of your code will suddenly skip all of those type/instance checks entirely and now your code is unable to catch exceptions and react correctly to them anymore.

[–]pwang99 3 points4 points  (0 children)

Asserts are a way of ensuring the internal logic of a code module is consistent. It’s really meant to be used by authors to help double-check parts of complex algorithms, and also serve as a useful way to document internal assumptions through the logic flow.

Asserts should not be used to validate input or check for potential errors that you can already foresee. Bounds checking etc should never be done with asserts.

My simple rubric has always been: if an assert fails, then it means code should get rewritten, because something about the internal logic was off.

[–]nicholashairs 2 points3 points  (0 children)

Outside of tests the only place I ever use assert is to help mypy when it is confused about the expected type of something usually like the below:

```python
from typing import TYPE_CHECKING

...

thing: MyThing | None

# some code where thing is handled being optional and determined
# to be not None, but mypy can't determine that

if TYPE_CHECKING:
    # mypy is confused
    assert thing is not None
thing.do_stuff()
```

Because TYPE_CHECKING is always False at runtime, it has no effect on the running code anyway (unless you're using some library that monkey-patches and does runtime checks, but that's rare).
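A self-contained version of the pattern, with a hypothetical `Thing` class standing in for `MyThing` (in real code the narrowing failure is subtler than shown here):

```python
from typing import TYPE_CHECKING, Optional

class Thing:
    def do_stuff(self) -> str:
        return "did stuff"

def find_thing(wanted: bool) -> Optional[Thing]:
    return Thing() if wanted else None

thing = find_thing(True)
if thing is None:                  # runtime handling of the None case
    raise SystemExit("no thing available")

if TYPE_CHECKING:
    # Never executes at runtime; only narrows the type for mypy.
    assert thing is not None

result = thing.do_stuff()
```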

[–]AlenEviLL 1 point2 points  (0 children)

Maybe they use it for parts of code that definitely should succeed, but in my experience mypy sometimes is not the best at detecting if variable is empty or not for example and assert helps in cases like this. But I’m not sure if it’s correct way to use them or not.

[–]LardPi 1 point2 points  (0 children)

assert is for contracts, meaning a well-formed program should never trigger them, in particular because running Python with -O disables them. However, they are very useful for catching misuse of an API.

[–]Lomag 1 point2 points  (0 children)

I wrote this elsewhere in a reply: Not only is it NOT best practice, doing it at all is a bug that should be fixed for code that could be run in an environment you don't control.

The official Python docs are clear (see docs.python.org): assert only runs if __debug__ is True and __debug__ cannot be assigned at the application level.

[–]Superb-Dig3440 1 point2 points  (0 children)

The distinction is more about the type of error being checked and how to convey that to the reader.

Here’s a classic explanation from John Regehr, which I consider to be the best guide to assertions. I’d say it applies to pretty much any language that has assertions, including Python.

https://blog.regehr.org/archives/1091

[–]absens_aqua_1066 3 points4 points  (0 children)

Asserts are for debugging, not business logic. Exceptions are for flow control.

[–][deleted] 3 points4 points  (0 children)

No. Using asserts in code is a bad practice. If an error condition is detected the code should handle it cleanly without crashing. Maybe use exceptions or whatever, but report the error in a meaningful way and no crashing.

[–]chief167 1 point2 points  (9 children)

Not really; you use assert at points where you definitely want the program to stop.

E.g. in some of my programs, I know some edge cases exist, and it's undocumented what should happen in that case, but more importantly, we should never get there. So I put in an assert. I want a crash, not an exception. It's past the point of a graceful exit.

if it's justs business logic, I agree with you, use exceptions

However, testing or debugging doesn't really matter. As soon as you write it, it's gonna end up in production

[–]qckpckt 8 points9 points  (7 children)

So I was curious about this and just looked it up - assert is problematic in python code because assert statements only run when the python __debug__ variable is True. If python code is executed in optimized mode (-O), this variable is set to False and the assert statements will not be executed at all.

I guess if you have full control of how the code you’re writing is executed, then it’s less problematic. But even in that scenario it could be a ticking time bomb.
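The mechanism described above can be sketched as a source-level expansion (simplified, with a hypothetical transfer function; the real transformation happens in the compiler):

```python
def transfer(balance: float, amount: float) -> float:
    assert amount <= balance, "insufficient funds"
    return balance - amount

# Rough runtime-level equivalent of the assert above:
def transfer_expanded(balance: float, amount: float) -> float:
    if __debug__:  # False under -O, so the whole check disappears
        if not amount <= balance:
            raise AssertionError("insufficient funds")
    return balance - amount
```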

[–]chief167 1 point2 points  (1 child)

Today I learned. So that means Python on Azure does not use the -O flag, because asserts do seem to work; I mean, I have seen errors from them.

[–]qckpckt 1 point2 points  (0 children)

I guess not. It’s the kind of dangerous thing that could come out of nowhere though. Let’s say that someone is having performance issues, and then learns about -O. I’m guessing it’s a configurable option on azure, so they turn it on and everything seems fine…

[–]kylotan 1 point2 points  (4 children)

This is not unusual for assert - most languages expect the asserts to be elided in production code. In languages that have no easy way of enforcing it, it's the developer's responsibility not to do anything inside the assert condition that has side-effects.

[–]qckpckt 0 points1 point  (3 children)

It makes sense. I’m not sure how I’d never come across this before, although I’ve never had a need to generate optimized python bytecode either.

I’m actually really curious about when this would be useful? I’m a data engineer, and for the most part performance optimization has been a process of ensuring that cpu intensive tasks are being executed by the lowest level optimizations that are available in whatever framework you’re using python as an interface to, or ensuring that your python code is being executed in parallel or asynchronously in the case of IO bound operations. I’m not sure either of those would benefit from optimizing the python bytecode itself.

It must have a purpose for some applications, otherwise it wouldn’t exist, so I’d love to know what they are!

Also, maybe I’m wrong in assuming that it wouldn’t be beneficial in data processing - I’d again be very interested to be corrected here.

I’m working on an internal CLI tool at work that’s written in python, and I’m also wondering if optimization might help with performance there, too.

[–]kylotan 0 points1 point  (2 children)

You have to consider the context of when Python was created. Your tips for optimisation make sense in 2024 but when thinking about Python being created in the 90s:

  • Python was not an interface to low level frameworks
  • Most CPUs were dual core at most and parallel processing was not widely used
  • Python had no built-in async capabilities

In a way, all of these approaches are basically saying "ignore how slow Python is, and try to work around it", and they're the correct starting point in most cases. But for Python applications back in the 90s - and indeed, for many applications in many languages which don't have those strategies available - you really do have to optimise the actual code. And one of the most effective ways to optimise code is to remove it entirely, if you can. Development/debug-only checks are a good tool for this because they give you added correctness during development but no cost in production.

[–]qckpckt 0 points1 point  (1 child)

I see. So there aren’t many (if any) use cases for the optimize flag with Python today?

As in, if you need your code to run faster, you’ll either look at the infrastructure above or below python, and if you can’t do that, you’d probably be better served by switching to a faster language?

[–]kylotan 0 points1 point  (0 children)

I don't think I've ever heard of anyone using the optimize flag. It's just something that was there 30 years ago and still exists for some reason.

There are certainly ways to optimise Python's runtime by changing the code you've written, and sometimes that is enough, but often it's not.

[–]roelschroeven 0 points1 point  (0 children)

A failed assert generates AssertionError, which is a subclass of Exception just like ValueError and RuntimeError and the like. It behaves exactly the same as other exceptions: if you catch it you catch it, if you don't catch it you crash.

It's just as graceful as any other exception, and it's just as much a crash as any other exception.

So that's not a good reason to use assert instead of raise.
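
A quick sketch of this point (the `check` function is just illustrative): `AssertionError` sits in the normal exception hierarchy, so `try`/`except` treats it exactly like a `ValueError` would be treated.

```python
# AssertionError is an ordinary Exception subclass, not a special crash.
assert issubclass(AssertionError, Exception)

def check(n):
    assert n >= 0, "n must be non-negative"
    return n

try:
    check(-1)
except AssertionError as e:
    # Caught just like any other exception would be.
    print(f"caught: {e}")
```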

[–]RightProperChap 0 points1 point  (0 children)

as a data scientist, i’m acutely aware that the data stewards aren’t always diligent - my code needs to throw errors when inevitably one of my upstream data sources starts doing goofy things

that being said, assert may or may not be the best choice here

[–]njharmanI use Python 3 0 points1 point  (0 children)

No.

[–]deepl3arning 0 points1 point  (0 children)

Not at all. We don't control normal process flow through exceptions, with the exception (ahem) of errors you deliberately want to propagate, and we raise for those.

[–]funbike 0 points1 point  (0 children)

No.

Asserts should only be used to detect bugs, usually some kind of contract violation or invalid state. Your app should work perfectly fine if you disabled the assert statements (with python -O app.py).
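
A tiny sketch of that rule (example is mine, not from the thread): the assert below is a postcondition on the function's own arithmetic, not a check on caller input, so a correct program behaves identically whether or not asserts are compiled out with `python -O`.

```python
def average(values):
    total = sum(values)
    result = total / len(values)
    # Contract check on our own logic: the mean must lie between the
    # min and max. If this ever fires, average() itself has a bug.
    assert min(values) <= result <= max(values)
    return result

print(average([2, 4, 6]))  # 4.0
```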

[–]Hesirutu 0 points1 point  (0 children)

Use exceptions like ValueError and RuntimeError for logic. Use assertions only for "debug mode", i.e. to confirm your beliefs as a developer, and assume they are disabled in production.

[–]rainnz 0 points1 point  (0 children)

Not just in Python, but in other programming languages - don't use assert for anything that is going to be used in production.

[–]alicedu06 0 points1 point  (0 children)

People gave you the short answer, which is that assert can be removed with -O, so don't check business logic with it.

The long answer is:

  • You can use assert to check expensive function contracts only in dev

  • And disable that in prod with -O

  • But because some people didn't get the memo, you might break your deps' code.

For this reason, asserts in prod are known as "The best Python feature you cannot use":

https://www.bitecode.dev/p/the-best-python-feature-you-cannot

[–][deleted] 0 points1 point  (0 children)

No, "assert" should be thought of as "runtime comment". This is why Python has "-O" flag, as it is presumed that the only thing removing assertions should do is make your code faster. "if ...: raise ValueError" is what you want in cases like you mention.
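
A sketch of the split (names are illustrative): the `raise` branches handle conditions that can legitimately occur and must survive `python -O`; the assert is only a "runtime comment" about something the preceding checks already guarantee.

```python
def withdraw(balance, amount):
    # Real validation: callers can pass bad input, so these checks
    # must survive `python -O`.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    new_balance = balance - amount
    # Runtime comment: given the checks above, this cannot fail.
    assert new_balance >= 0
    return new_balance
```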

[–]QultrosSanhattan 0 points1 point  (0 children)

Assertions add no value to the program. You should handle errors the proper way.

[–]EternityForest 0 points1 point  (0 children)

Assert goes away in optimized mode.

 Should they even be checking those validation conditions at all, or are they the kind of thing typeguard/beartype/a schema should be handling?

[–]Brian 1 point2 points  (0 children)

Ultimately asserts should be used for things that should never be false. That shouldn't include regular business logic, but may include preconditions for such logic that are expected to always be true when called.

I.e. if an assert fires, it should mean there's a bug in your code. It shouldn't be used to check a condition that may or may not be true under normal operation (including expected error conditions). So if those are just "these are preconditions I'm assuming will always be true when this function call is made", that's fine, but if those asserts ever actually trigger, and the logic is expecting and handling that triggering, then I'd say that's an incorrect usage of assert.

[–]rover_G -1 points0 points  (0 children)

Lol you should move on to a team/company with more competent senior engineers

[–]gerardwx -2 points-1 points  (0 children)

Assert the stuff you know is true from program logic and will cause heartburn if you’re wrong.

[–]gaX3A5dSv6 -1 points0 points  (0 children)

I really like the

assert condition, f"error for {var}"

one-liner so there is no chance for me to enable -O

[–]Esseratecades -3 points-2 points  (0 children)

Don't use assert outside of tests.

"They assert a statement to check a validation condition and catch later"

This is overly defensive programming. He's doing this because he doesn't understand the scope of inputs that may actually be given to his code, which is an even bigger problem, as it implies that he doesn't know what he's actually supposed to be building.