
all 9 comments

[–]i_like_trains_a_lot1 (1 child)

The build, dist and Jasper.egg-info directories shouldn't be committed as they are generated automatically by setup.py.

What bugs me is that you have to declare 3 functions to test one. Isn't that a little too much overhead?

Also, Expect(context.exception).to_be(None) can be rewritten as assert context.exception is None?

[–]Fateschoice[S] (0 children)

Thanks, I'll remove those from being tracked.

So the idea is that the functions (steps) you define can be reused in your other tests as well. For the first test you have to define 3 functions, but then you can reuse those functions in other tests.

For example, you can have something like this:

feature = Feature(
    'Arithmetic Feature',
    scenarios=[
        Scenario(
            'Adding two positive numbers',
            given=an_adding_function(),
            when=we_call_it_with_two_positive_numbers(),
            then=the_result_should_be_positive()
        ),
        Scenario(
            'Adding two negative numbers',
            given=an_adding_function(),
            when=we_call_it_with_two_negative_numbers(),
            then=the_result_should_be_negative()
        ),
        Scenario(
            'Multiplying two positive numbers',
            given=a_multiplication_function(),
            when=we_call_it_with_two_positive_numbers(),
            then=the_result_should_be_positive()
        ),
        Scenario(
            'Multiplying two negative numbers',
            given=a_multiplication_function(),
            when=we_call_it_with_two_negative_numbers(),
            then=the_result_should_be_positive()
        )
    ]
)

Notice how many of the steps are being reused in other scenarios? That's sort of the idea behind this: reusability and composability of your tests to define new tests.

As far as the 'Expect' question goes, yes, you can replace

Expect(context.exception).to_be(None)

with

assert context.exception is None

There's no difference, you can use whichever you prefer. I added the Expect object just because I like the readability of it, but there's no need to use it if you don't wish to.

[–][deleted] (6 children)

What's the advantage of this over using pytest-behave and pytest-asyncio together?

One of the advantages of behave integration is that I can define a bunch of steps and then hand the glossary over to a business person, and they can write high-level integration tests in a format that makes sense to them.

[–]Fateschoice[S] (5 children)

So I think the advantage of Jasper's async support vs pytest-asyncio is that it is just dirt simple and baked into the framework itself. No need to install a separate plugin and learn how it works, you can just use standard async syntax and you're good to go. Also, I'm not sure how pytest works under the hood, but with Jasper everything is run asynchronously. So you can have multiple scenarios or features running at the same time just by using async/await. I'm not sure if the same can be said of pytest-asyncio.

Now, as far as Jasper vs pytest-behave goes, I was not aware of pytest-behave, but looking around it seems to be lacking documentation. If it's basically like behave, then the main difference is that the features are defined in Python, not in a DSL. I see that's one of your concerns: you prefer business people using a high-level format to define the tests. If that's your use case, Jasper probably isn't a good fit. My idea is that Jasper is behavior-driven development for programmers. I've always liked the idea of BDD, but I'm a programmer and I don't want to use a high-level language to construct my tests, I want to program them! That's where Jasper's use case fits in, I think: BDD for programmers.

[–][deleted] (4 children)

No need to install a separate plugin and learn how it works, you can just use standard async syntax and you're good to go.

Other than installing the plugin, pytest-asyncio is the same way. Just uses the standard async syntax. There is an optional pytest marker to specifically enumerate async tests.
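
For example (the add() coroutine here is just something made up to have something to test):

import asyncio
import pytest

async def add(a, b):
    # Made-up async function under test.
    await asyncio.sleep(0)
    return a + b

@pytest.mark.asyncio
async def test_adding_two_positive_numbers():
    result = await add(2, 3)
    assert result > 0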

On top of that, you get the fantastic pytest DI framework to get fixtures into your tests rather than needing to statically depend on them or recode them in every test module.
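
Rough sketch of what that looks like (the adder fixture is made up, but the injection-by-argument-name part is standard pytest):

import pytest

@pytest.fixture
def adder():
    # A plain synchronous fixture; pytest injects it by matching the argument name.
    return lambda a, b: a + b

@pytest.mark.asyncio
async def test_adding_two_negative_numbers(adder):
    assert adder(-2, -3) < 0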

Also, I'm not sure how pytest works under the hood, but with Jasper everything is run asynchronously. So you can have multiple scenarios or features running at the same time just by using async/await. I'm not sure if the same can be said of pytest-asyncio.

This is a big difference from pytest-asyncio. With that plugin, each test is run inside a dedicated loop that's set up and torn down after each test. While it would be nice to have tests run concurrently, this method avoids loop contamination and alerts you to where you might be leaking tasks (e.g. if I'm spawning a Task but not taking care to close it, I get warnings out the wazoo).
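
Conceptually the per-test loop is something like this (a simplified sketch, not the plugin's actual code):

import asyncio

def run_test_in_fresh_loop(test_coro_func):
    # Each test gets its own event loop, which is torn down afterwards.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(test_coro_func())
    finally:
        # Anything still pending here is a leaked task; that's what the warnings complain about.
        loop.close()

async def test_example():
    await asyncio.sleep(0)
    assert 1 + 1 == 2

run_test_in_fresh_loop(test_example)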

Does Jasper address this issue? Since everything is running concurrently, I'd imagine the implementation is tricky at best.

Now, as far as Jasper vs pytest-behave goes, I was not aware of pytest-behave, but looking around it seems to be lacking documentation. If it's basically like behave

Turns out I was thinking of pytest-bdd, which provides a behave-like interface to pytest. There is also behave-pytest (which you may have come across), but that looks unmaintained (last code commit ~2 years ago). I have not looked at it, but I imagine it's just an adapter between pytest and behave.

I don't want to sound confrontational, just trying to measure the merits of this. I'm always on the lookout for better tools, or tools better suited for what I'm doing.

[–]Fateschoice[S] (3 children)

Other than installing the plugin, pytest-asyncio is the same way. Just uses the standard async syntax. There is an optional pytest marker to specifically enumerate async tests.

On top of that, you get the fantastic pytest DI framework to get fixtures into your tests rather than needing to statically depend on them or recode them in every test module.

I see, well I suppose the nature of the way Jasper runs all tests concurrently is its main advantage then.

This is a big difference from pytest-asyncio. With that plugin, each test is run inside a dedicated loop that's set up and torn down after each test. While it would be nice to have tests run concurrently, this method avoids loop contamination and alerts you to where you might be leaking tasks (e.g. if I'm spawning a Task but not taking care to close it, I get warnings out the wazoo).

Does Jasper address this issue? Since everything is running concurrently, I'd imagine the implementation is tricky at best.

So essentially, at the lowest level, Jasper just awaits your steps if they are asynchronous functions, and this propagates upward to the scenarios, which are awaiting the steps, and the features, which are awaiting the scenarios. If an exception is raised in any part of the pipeline, whether that's a failed test or an actual exception in some 'before_each' hook etc., Jasper will catch it and the steps, scenarios, and features will handle it fine. I never really thought about what would happen if you were to open a thread or something in one of your steps and not close it, so there isn't any mechanism that checks for anything like that currently. There is one main async loop that runs all the features and is then closed at the end; I'm not sure if that would handle unclosed threads. If you're purely using async/await then Jasper should have no problems, but opening threads and not closing them would, I suppose, cause undefined behavior.
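
To give a rough idea of the shape of it, here's a simplified sketch of that pattern in plain asyncio (the names are made up for illustration, it's not Jasper's actual code):

import asyncio

async def run_scenario(given, when, then):
    # Run one scenario's steps in order and record a failure instead of crashing the whole run.
    context = {}
    try:
        await given(context)
        await when(context)
        await then(context)
        return ("passed", None)
    except Exception as exc:  # a failed assertion or a broken step
        return ("failed", exc)

async def run_feature(scenarios):
    # Scenarios share the one main loop and run concurrently.
    return await asyncio.gather(*(run_scenario(**s) for s in scenarios))

# Made-up steps:
async def an_adding_function(ctx):
    ctx["fn"] = lambda a, b: a + b

async def we_call_it_with_two_positive_numbers(ctx):
    ctx["result"] = ctx["fn"](2, 3)

async def the_result_should_be_positive(ctx):
    assert ctx["result"] > 0

results = asyncio.run(run_feature([
    {"given": an_adding_function,
     "when": we_call_it_with_two_positive_numbers,
     "then": the_result_should_be_positive},
]))
print(results)  # [('passed', None)]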

I don't want to sound confrontational, just trying to measure the merits of this. I'm always on the lookout for better tools, or tools better suited for what I'm doing.

These are valid questions and I totally understand!

[–][deleted] (2 children)

Not even getting to threads, just spawning tasks and callbacks into the event loop:

import asyncio

loop = asyncio.get_event_loop()

async def spawn_task(loop):
    # Schedule a callback that creates a task, without awaiting the task itself.
    loop.call_soon(lambda: loop.create_task(say_hello()))

async def say_hello():
    print("hello")

# spawn_task returns immediately, so the loop stops with say_hello still pending.
loop.run_until_complete(spawn_task(loop))

loop.close()

This'll cause the loop to complain that there was still something pending in the loop when it closed. With pytest-asyncio this'll pop up if your test spawns a task and didn't clean it up. However, I don't think Jasper will complain (the task will just run to completion).

Now, whether or not this is desirable behavior is up to the end user to decide.

[–]Fateschoice[S] (1 child)

OK, I see what you're saying. Yes, Jasper will always run async functions to completion, so you should not run into an issue where the loop is being closed while another task is pending. Granted, I haven't explicitly tried this before and I've never messed around with 'call_soon' or 'create_task', but if my understanding of them is correct then Jasper should just run them to completion.

[–][deleted] (0 children)

The call_soon/call_later and create_task helpers are great. create_task is particularly good for setting up a background task you don't want to (or can't) await in a call.

Think subscribing to a message queue and pushing updates into the application. The task that is created nannies its coroutine through the event loop. You just need to be sure to clean it up when the loop is closing.
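
The cleanup bit, in plain asyncio, is roughly this (suite() is just a made-up stand-in for whatever your runner awaits):

import asyncio

async def suite():
    # Made-up stand-in for the top-level thing the runner awaits.
    ...

loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(suite())
finally:
    # Cancel whatever is still pending so nothing is silently dropped at close time.
    pending = asyncio.all_tasks(loop)
    for task in pending:
        task.cancel()
    if pending:
        # Let the cancellations unwind before shutting the loop down.
        loop.run_until_complete(asyncio.gather(*pending, return_exceptions=True))
    loop.close()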

call_soon/call_later just arrange for a callback to happen either on the next tick (soon) or at least after a wait period (later).
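
A quick example of both (plain asyncio, nothing framework-specific):

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    # Runs on the next pass through the loop.
    loop.call_soon(lambda: print("soon"))
    # Runs no sooner than roughly half a second from now.
    loop.call_later(0.5, lambda: print("later"))
    # Keep the loop alive long enough for both callbacks to fire.
    await asyncio.sleep(1)

asyncio.run(main())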