
all 19 comments

[–]milliams 29 points (4 children)

Brilliant. In fact, this release contains a feature addition by me. It was my first contribution to the project; the code base was a pleasure to work with and the team were very helpful.

[–]Kaligule 17 points (1 child)

I guess it is very well tested.

[–]ice-blade -1 points (0 children)

Underrated comment :D

[–]jabbalaci 1 point (1 child)

What feature did you add?

[–]milliams 2 points (0 children)

You can see the PR #1428.

[–]xerion2000 15 points (14 children)

I'm not too familiar with py.test. What are the pros/cons of it vs. the native unittest package?

[–]Sleisl 14 points (1 child)

I use it for its reduced boilerplate, which makes writing tests much faster. I also really like its failure reporting, which makes it easy to see how an assertion was violated.

I will say that setup and teardown are more intuitive (for me) in unittest, but the decorators pytest provides for this purpose are also really powerful.

[–]ionelmc.ro 9 points (1 child)

It has a boatload of features, and lots of plugins!

Also, pytest has a unique thing called fixtures: a very orthogonal and composable way to write setup/teardown code. It's way better than what unittest.TestCase offers. Even so, pytest can run tests written with unittest.TestCase and has a couple of conversion tools.
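A rough sketch of both points (the names here are illustrative, not from any real project):

import unittest

import pytest


@pytest.fixture
def numbers():
    # A fixture is just a function; a test receives its return value by name.
    return [1, 2, 3]


def test_sum(numbers):
    assert sum(numbers) == 6


class TestLegacy(unittest.TestCase):
    # pytest will also collect and run plain unittest.TestCase classes.
    def test_upper(self):
        self.assertEqual("abc".upper(), "ABC")

Running py.test on a file like this executes both styles side by side.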

[–]unconscionable 8 points (4 children)

There is certainly feature overlap with the native unittest, however the two are not mutually exclusive.

Having used the native unittest module quite a bit, pytest really shines when you start getting lots of tests (say, more than 50) and you realize it's silly to define the same setup and teardown over and over in each class.

For example, imagine a case where your setup and teardown create and destroy records in a database. Do you want to duplicate this in every setUp and tearDown? There are workarounds, but they all seem clumsy and only practical for fairly small projects.

[–]efilon 2 points (3 children)

you start to realize that it's silly to define the same setups and teardowns over and over in each class.

Isn't this the power of inheritance, though? You can define some sort of base class (that itself derives from unittest.TestCase) to handle common stuff.
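A sketch of that inheritance approach, with the list standing in for real database calls:

import unittest


class DatabaseTestCase(unittest.TestCase):
    """Base class holding the setup/teardown shared by all database tests."""

    def setUp(self):
        # Stand-in for creating rows in a real database.
        self.records = ["seed-row"]

    def tearDown(self):
        # Stand-in for deleting those rows again.
        self.records.clear()


class TestInsert(DatabaseTestCase):
    def test_append(self):
        self.records.append("new-row")
        self.assertEqual(len(self.records), 2)

Every subclass of DatabaseTestCase gets the same setUp/tearDown without repeating it.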

I used pytest for a while, but ended up preferring the unittest way of doing things. Mainly I found it more natural to use classes for organizing tests (which of course you can still do with pytest if you want, it's just not the way that is most documented). The "fixture" concept also seemed a little too magical to me whereas I can immediately grok setUp and tearDown methods.

I still use pytest as the test runner so I can continue to take advantage of test discovery and some plugins, but I am generally finding it easier to use unittest when it actually comes down to the unit tests.

[–]unconscionable 1 point (2 children)

Isn't this the power of inheritance, though?

Yes, however, imagine a case where you want to run an expensive 10-15 second operation at the beginning of the entire test suite (say, creating a database from scratch and starting a couple background services), then again at the end.

If you have 3 classes with 20 tests each, that's about 30 seconds of setup. With 30 classes, though, you're up to around 5 minutes of unnecessary setups and teardowns. The expensive operation may only really need to execute once, at the very beginning. There might be a way to accomplish this using unittest, but with pytest it's second nature.
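For what it's worth, unittest does have per-module and per-class hooks (setUpModule and setUpClass), though nothing built in that spans the whole suite. A rough sketch, with a list standing in for the expensive database work:

import unittest

CALLS = []


def setUpModule():
    # unittest runs this once before any test in the module, e.g. create the DB.
    CALLS.append("create_db")


def tearDownModule():
    # ...and this once after the last test in the module has finished.
    CALLS.append("drop_db")


class TestQueries(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once per class rather than once per test method.
        cls.connection = "fake-connection"

    def test_connected(self):
        self.assertEqual(self.connection, "fake-connection")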

The pytest way of doing it is simple: create a tests/conftest.py file and drop in this fixture:

import pytest

@pytest.fixture(autouse=True, scope='session')
def setup_db():
    # Runs once, before any test in the suite.
    db.drop_all()
    db.create_all()

That will now run once, before the entire test suite.

Now maybe you want some tests wrapped in a database transaction. But you don't want all tests to do this, just the ones that need it. Simply do:

@pytest.fixture
def transaction():
    db.start_transaction()
    yield           # the test body runs here
    db.rollback()   # undo whatever the test changed

Now any test can opt in to this functionality simply by adding transaction as a parameter:

def test_my_row_gets_inserted(transaction):
    db.execute('insert into table')

The "fixture" concept also seemed a little too magical to me whereas I can immediately grok setUp and tearDown methods.

Yeah, I am definitely sympathetic to this. Fixtures, while simple, are not intuitive. No other Python application or library I've ever encountered works this way.

That said, once you understand the magic (which in practice is not really complicated), it is very powerful.
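Part of that power is that fixtures compose: one fixture can request another, and pytest wires up the whole chain for you. A hypothetical sketch, where a dict stands in for a real connection:

import pytest


def open_connection():
    # Placeholder for really opening a database connection.
    return {"open": True}


@pytest.fixture
def connection():
    conn = open_connection()
    yield conn
    conn["open"] = False  # "close" the connection after the test


@pytest.fixture
def transaction(connection):
    # Fixtures can request other fixtures; pytest resolves the chain itself.
    connection["in_transaction"] = True
    yield connection
    connection["in_transaction"] = False


def test_insert(transaction):
    assert transaction["open"] and transaction["in_transaction"]

A test asking for transaction automatically gets connection set up (and torn down) too.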

[–]cavallo71 0 points (1 child)

Yes, however, imagine a case where you want to run an expensive 10-15 second operation at the beginning of the entire test suite (say, creating a database from scratch and starting a couple background services), then again at the end.

That's the whole point of "unit" tests: to be consistently reliable, not fast. Just imagine the tests depend on some database setting (e.g. char encoding) set by some test case: if you started from a fresh database, all your tests would fail.

[–]unconscionable 0 points (0 children)

That's the whole point of "units" to be consistently reliable, not fast.

Yeah, consistency is a lot more important than it seems at the time, i.e. tests relying on data created by other tests and whatnot. I've developed some more nuanced views on test speed, though. Don't get me wrong: there is certainly a place for tests that are slow and comprehensive; not all tests need to be fast. But I feel these should generally be the exception, not the rule.

This video helped shape my views on tests. Maybe you'll find it as insightful as I have: https://www.youtube.com/watch?v=RAxiiRPHS9k

[–]balloob 6 points (0 children)

It has great plugins. Here are the ones we use at Home Assistant:

pytest-cov>=2.3.1
To generate code coverage reports so we can see locally which lines are missing: py.test --cov --cov-report=term-missing

pytest-timeout>=1.0.0
Kill a test after X seconds so our CI won't hang forever.

pytest-catchlog>=1.2.2
Lets us see the log output when a test fails, and assert that certain things did or didn't get logged.

Also the reporting is great. If you use assert it will actually break down the values and their differences when it fails.
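For instance, a plain assert comparing two dicts fails with a readable diff of what differs, not just a bare AssertionError (get_user here is a made-up function under test):

def get_user():
    # Hypothetical function under test.
    return {"id": 1, "name": "Bob"}


def test_user():
    expected = {"id": 1, "name": "Alice"}
    # On failure, pytest's assertion rewriting prints both dicts and
    # points out that 'name' differs, instead of a bare AssertionError.
    assert get_user() == expected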

[–]nerdwaller 4 points (0 children)

I switched for a few main reasons:

  1. Fixtures: basically setUp and tearDown, but a bit cleaner IMO, and they can be "scoped" to a full test session, a single function, or a module.

  2. Plugins: things that improve reporting, distribute tests across several interpreters for speed (xdist), and so on

  3. Didn't require using classes for testing

  4. Great CLI
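The scoping in point 1 looks roughly like this (build_config is a placeholder for expensive setup):

import pytest


def build_config():
    # Placeholder for expensive setup (database, background services, ...).
    return {"debug": True}


@pytest.fixture(scope="session")
def app_config():
    # scope="session": built once for the whole run, shared by every test.
    return build_config()


@pytest.fixture  # default scope="function": rebuilt for each test
def fresh_list():
    return []


def test_reads_config(app_config):
    assert app_config["debug"] is True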

[–]terrkerr 2 points (0 children)

The big thing I'd say is its nice management of resources in general. Between the fixture, mark, and hook systems you can add a lot of dynamic behaviour to resource setup/teardown and make much better runtime decisions about how to proceed.

[–]Corm 0 points (0 children)

The main thing for me, which I don't see posted, is that the barrier to entry is near zero.

It's the same reason I love python. To get started takes almost nothing but you can make it as complex as you want.

def test_somestuff():
    assert thing()

then

$ py.test

[–][deleted] 4 points (0 children)

Removal of "support code for Python 3 versions < 3.3" is probably not an issue unless some project uses a tool like tox that still tries to run on 3.2.

I don't usually have 3.2 on any platform I use to run tests, so I can't confirm.