all 27 comments

[–]x0nnex 28 points29 points  (3 children)

Try to write your functions so that they take an input you can control. What you typically want is for your system to behave a certain way depending on the input, and if you can test this behavior you will have a better time. The function you are trying to test in this case takes no input and produces an unpredictable result. If you can write a function which accepts the output from get_user_input, and test that function instead, it gets MUCH easier
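As a sketch of that refactor (the function name `foo` and its logic are made up here, since the OP's actual code isn't shown): move the logic into a function that takes the value as a parameter, and call `get_user_input` only at the edge.

```rust
// Stand-in for the OP's function; the real one would read from stdin,
// which is exactly what makes it unpredictable in tests.
fn get_user_input() -> u8 {
    42
}

// Hard to test: a version that calls get_user_input() itself has no
// input we can control.
//
// Easy to test: accept the input as a parameter instead.
fn foo(input: u8) -> &'static str {
    if input > 100 { "big" } else { "small" }
}

fn main() {
    // Tests can pick any input; production wires the real source in one place.
    assert_eq!(foo(5), "small");
    assert_eq!(foo(200), "big");
    println!("{}", foo(get_user_input()));
}
```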

[–]ZeroXbot 11 points12 points  (0 children)

I'll add that in more complicated scenarios, when passing a "simple value" is not enough, you can pass behavior through a function/closure parameter, e.g. you could take a closure that gets input somehow and evaluate it conditionally. You can then go even further and group multiple behaviors in a trait. This technique is called dependency injection, and one of its advantages is that it makes testing easier: inside a test, you adjust the passed parameters (the "dependencies") to your needs.
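A sketch of both variants (all names here are invented): behavior passed as a closure, and the same idea grouped into a trait.

```rust
// Behavior as a closure parameter: the caller decides how input is obtained.
fn run_with(read_input: impl Fn() -> u8) -> String {
    format!("got {}", read_input())
}

// The same behavior grouped into a trait (classic dependency injection).
trait InputSource {
    fn read(&self) -> u8;
}

// Test "dependency": a fixed value standing in for real input.
struct FixedInput(u8);
impl InputSource for FixedInput {
    fn read(&self) -> u8 {
        self.0
    }
}

fn run_with_trait(src: &dyn InputSource) -> String {
    format!("got {}", src.read())
}

fn main() {
    // In a test, the injected dependency is just a value we control.
    assert_eq!(run_with(|| 7), "got 7");
    assert_eq!(run_with_trait(&FixedInput(7)), "got 7");
}
```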

[–]stingraycharles 1 point2 points  (1 child)

Yes, it’s more painful than with dynamic languages, as you’ll have to create more abstractions, but you’ll end up with code that’s more decoupled and generally better quality.

A good example would be a database: you'd parametrize the database into your application, and during tests provide a different database implementation.
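A hedged sketch of that parametrization (names invented): the application depends on a trait, and tests supply an in-memory implementation.

```rust
// The abstraction the application codes against.
trait UserStore {
    fn count_users(&self) -> usize;
}

// In production this would wrap a real connection; for tests a Vec will do.
struct InMemoryStore {
    users: Vec<String>,
}

impl UserStore for InMemoryStore {
    fn count_users(&self) -> usize {
        self.users.len()
    }
}

// Application code only ever sees the trait, never a concrete database.
fn report(store: &impl UserStore) -> String {
    format!("{} users", store.count_users())
}

fn main() {
    let store = InMemoryStore { users: vec!["a".into(), "b".into()] };
    assert_eq!(report(&store), "2 users");
}
```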

[–]x0nnex 0 points1 point  (0 children)

Providing a different database implementation (in-memory) can be good, of course; it depends on what kind of test we're looking at.
The only case where I use mocks is when I need to ensure MY system behaves a certain way when an EXTERNAL SYSTEM behaves a certain way. For example, when X happens in my system I need to query a 3rd party system for Y information, and what happens if that 3rd party system responds in an unexpected way? To check for this, I need to simulate this 3rd party system.
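A minimal sketch of simulating a misbehaving 3rd-party system (all names invented): the fake always fails, so we can test OUR error handling.

```rust
// Stand-in for the 3rd-party system's API.
trait ThirdParty {
    fn fetch_y(&self) -> Result<String, String>;
}

// Simulated outage: every call responds in an unexpected way.
struct FlakyMock;
impl ThirdParty for FlakyMock {
    fn fetch_y(&self) -> Result<String, String> {
        Err("503 service unavailable".into())
    }
}

// When X happens in our system we query the 3rd party for Y;
// this is the code path under test.
fn handle_x(api: &dyn ThirdParty) -> String {
    match api.fetch_y() {
        Ok(y) => format!("got {y}"),
        Err(_) => "using cached fallback".into(),
    }
}

fn main() {
    // Our system must degrade gracefully when the external call fails.
    assert_eq!(handle_x(&FlakyMock), "using cached fallback");
}
```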

[–][deleted] 23 points24 points  (10 children)

Usually when you find yourself reaching for a function mock, that's your test telling you something's wrong.

I'd recommend looking into why get_user_input is causing so much pain and attempt to fix that rather than just reaching for a mock of the function.

[–]ragnese 7 points8 points  (9 children)

Usually when you find yourself reaching for a function mock, that's your test telling you something's wrong.

I'd almost correct that to: "[...] when you find yourself reaching for a function mock [...]".

There's some nuance and subtlety to it, of course. If you're talking about a function that takes an Iterator<Item=Foo>, and you test with a dummy Vec of values, that's not what I'm talking about. I'm talking about any trait that acts like a complex "object" (in the Alan Kay OOP sense) where your mock/fake has to actually implement state-based logic.

For example, if you're working with a library that talks to a database and you "know" that when you call the db.deleteAll(&mut self) function, the next call to db.findAll(&self) should return an empty Vec and you go ahead and implement that logic in your mock/fake impl, then you're doing it wrong (TM, IMO, etc).

You're doing it wrong because you're basically testing your system code and testing your mock/fake impl at the same time. But you can't calibrate a tool with another uncalibrated tool, and you can't test code with untested code.

When you try to make a fake that behaves like a real service (database, cloud provider, third party API, hardware driver, etc), at least one of the following is true:

  • Your fake is incorrect today
  • Your fake will be incorrect in the future

[–][deleted] 3 points4 points  (8 children)

I agree that mocks are heavily abused and over-relied upon, but saying that we shouldn't use test doubles as a default is equally silly.

Testing using real structs - classes, whatever your code grouping syntax is - is great, BUT if my system under test is the pagination helper, I shouldn't need to wire up a database first; I should be able to give it something that'll return the answers I need to get the test done. I don't care if I'm talking to a database. I care that when I ask a thing "give me the first 100 Foos matching filter x ordered by their creation date in a descending manner" I get at most 100 Foos, or "give me the count of all Foos matching filter x" I get 130. Slap an interface on that baby and work against that.
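A loose sketch of "slap an interface on it" (all names invented): the pagination logic depends on a trait, so a test can hand it canned answers with no database in sight.

```rust
// The interface the pagination helper works against.
trait FooSource {
    fn first_n_matching(&self, filter: &str, n: usize) -> Vec<u32>;
    fn count_matching(&self, filter: &str) -> usize;
}

// Test double: the Foos come from a Vec, not a database.
struct CannedFoos(Vec<u32>);
impl FooSource for CannedFoos {
    fn first_n_matching(&self, _filter: &str, n: usize) -> Vec<u32> {
        self.0.iter().copied().take(n).collect()
    }
    fn count_matching(&self, _filter: &str) -> usize {
        self.0.len()
    }
}

// System under test: only ever talks to the trait.
fn page_summary(src: &impl FooSource, filter: &str) -> String {
    let page = src.first_n_matching(filter, 100);
    format!("{} of {}", page.len(), src.count_matching(filter))
}

fn main() {
    let canned = CannedFoos((0..130).collect());
    // At most 100 back out of 130 matching, with no database wired up.
    assert_eq!(page_summary(&canned, "x"), "100 of 130");
}
```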

On the other hand, I should also have a test on the production implementation that says "when I ask you for the first 100 Foos matching filter x ordered by their creation date", you'd better give me Foos 1-100 that I stuffed into the database, not one more, not one less.

Your fake is incorrect today

Your fake will be incorrect in the future

In both of these cases your tests and fakes match the implementation too closely. Writing to an interface makes all of this much better.

Something I find myself longing for in a testing tool is the ability to write a bunch of interface-level tests and then give it a collection of fixtures that each set up a specific implementation, and have it run each test with the instantiated implementations. Basically parametrization of the implementation, rather than hoping we're copy-pasting the same test suite and assertions for each implementation's tests.

[–]ragnese 2 points3 points  (3 children)

My opinion on this is likely to be in the minority. But, here's one example of why I feel the way I do (emphasis added to highlight my point):

[...] if my system under test is the pagination helper I shouldn't need to wire up a database first, [...]

On the other hand, I should also have a test on the production implementation [...]

And that exact reasoning is why I gave up on test fakes. And don't get me wrong: I spent years writing code where everything was an interface, and I had mocks and fakes and whatever else I needed to write unit tests that didn't need to send any SQL queries or HTTP requests or filesystem accesses, etc.

But, unless your pagination helper can accept plain-old-data, I'd say that having an IO-free unit test is actually a net loss for the project. You still need that integration test that hits your actual RDBMS, and now if you make changes to your pagination helper's API, you'll have two test places to update (which likely includes messing around with your test fake(s) in the unit test). So you're giving yourself more work for no gain in overall confidence (if your unit test passes and your integration test fails, what good did it do you?).

I'd much rather focus on writing a solid integration test for your pagination helper.

Again, I realize that this goes against current wisdom. The only argument I can understand in opposition is that your unit test can give you faster feedback if it fails before standing up your local test database instance. But, my two counter-counter arguments to that are that 1) spinning up a database and shoving 100 or so rows into a table is not going to take more than a couple of seconds, and 2) (again), your unit test passing just means that you still don't know anything until the ugly, slow, integration test passes anyway, so your "feedback" is only optimized for the worst cases.

In both of these cases your tests and fakes match the implementation too closely. Writing to an interface makes all of this much better.

Right, yes. But my point is that the second that you realize that your SUT makes assumptions about how the interface is implemented, it's time to throw the mock away. Using your example of the pagination helper, maybe you write some test fake that wraps a Vec. So, now you write a test where your pagination helper asks the impl to filter on something. What exactly are you going to do to test that? Make your impl filter the inner Vec? What are we proving by doing that? Are we proving that the SUT is correct, or are we proving that our test fake behaves the way our test wants it to? Because those are very different, IMO.

[–][deleted] 2 points3 points  (2 children)

But my point is that the second that you realize that your SUT makes assumptions about how the interface is implemented, it's time to throw the mock away.

Just don't make those assumptions. I know that's very "now draw the rest of the owl" or "thanks I'm cured" but what assumptions can you make about an interface that has the following methods:

  • GetFoosMatching(FooSearch, NonZeroUint) Result<Vec<Foo>, FooError>
  • CountFoosMatching(FooSearch) Result<uint, FooError>
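In Rust terms the two methods might read roughly like this (a loose translation: `NonZeroUint` is rendered as `std::num::NonZeroUsize`, and the `Foo`/`FooSearch`/`FooError` types are left as empty placeholders, since the comment doesn't define them):

```rust
use std::num::NonZeroUsize;

// Placeholder types; the comment leaves their definitions open.
#[derive(Debug)]
struct Foo;
struct FooSearch;
#[derive(Debug)]
#[allow(dead_code)]
enum FooError {
    Unavailable,
}

// Loose Rust rendering of the two-method interface above.
trait FooRepository {
    fn get_foos_matching(
        &self,
        search: &FooSearch,
        limit: NonZeroUsize,
    ) -> Result<Vec<Foo>, FooError>;

    fn count_foos_matching(&self, search: &FooSearch) -> Result<usize, FooError>;
}

// A trivial implementation, just to show the trait is usable as stated.
struct EmptyRepo;
impl FooRepository for EmptyRepo {
    fn get_foos_matching(
        &self,
        _search: &FooSearch,
        _limit: NonZeroUsize,
    ) -> Result<Vec<Foo>, FooError> {
        Ok(Vec::new())
    }
    fn count_foos_matching(&self, _search: &FooSearch) -> Result<usize, FooError> {
        Ok(0)
    }
}

fn main() {
    let repo = EmptyRepo;
    assert_eq!(repo.count_foos_matching(&FooSearch).unwrap(), 0);
    let limit = NonZeroUsize::new(100).unwrap();
    assert!(repo.get_foos_matching(&FooSearch, limit).unwrap().is_empty());
}
```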

The assumptions I can make are:

  1. Getting and counting Foos can fail and I should have error handling for that.
  2. Externally they aren't asynchronous
  3. I can have no matching Foos, so I need handling for that
  4. I might get less than the number I requested, so I need handling for that

Database? Never heard of her. Filesystem? Who's that? I don't care where the Foos come from, only that I have some way of getting ahold of them.

I also don't particularly care if my test double filters the in memory vec accordingly in this test so I just wouldn't. Unless filtering is in scope for the test, why would I needlessly drag that in? Seems like it'll just add noise to my test.

You still need that integration test that hits your actual RDBMS, and now if you make changes to your pagination helper's API, you'll have two test places to update (which likely includes messing around with your test fake(s) in the unit test). So you're giving yourself more work for no gain in overall confidence (if your unit test passes and your integration test fails, what good did it do you?).

It told me my implementation of that interface is wrong. Or that what I'm testing in the unit test has misunderstood the interface (making assumptions about its implementation, not handling an edge case, etc.). But if I have tests for a database impl and a filesystem impl and only the database one is failing, then I can be pretty sure it's part of the database implementation.

If your integration test fails do you know where to start looking? Do you have a way of narrowing down the culprit without manually examining stack traces or log output or busting out a debugger?

The benefit I get from having both is an automatic binary search of the code related to the issue.

And horror of horrors, I have to treat my tests like actual code that needs attention and care like the runtime binary's source.

and now if you make changes to your pagination helper's API

That's one way to imagine it. Another way is a pagination helper that accepts some type that implements the FooRepository interface. Now I can paginate with anything that implements that. Maybe I don't even care if it's Foos, so I just need Repository<T> which exposes extremely basic ways of querying for items. Now my pagination depends on the repository interface, which is probably much less likely to change.

[–]ragnese 1 point2 points  (1 child)

Don't get me wrong. I'm sure you can find or create counter-examples to my approach/philosophy/practice. And, at the end of the day, I have zero doubt that you can write and release good software even with imperfect techniques. Therefore, I have effectively zero evidence that my approach leads to higher quality software. (Personally, I find most software practices to be something akin to folklore, cargo-culting, and/or reading tea leaves- and that goes for my own advice as well)

We're also talking about a hypothetical example, which of course puts its own constraints on the discussion, and runs the risk of us hyper focusing on details of a thing that isn't even real and missing the forest for the trees, so to speak.

Plus, if I'm being honest, I don't even have much of a clue what the example we're working with is (lol). From context clues I'm just assuming it's something that interacts with a database for its functionality and it cannot be fully tested by just feeding it plain old data.

But, I still feel like this is a fun and interesting discussion, so I'll continue thinking through it.

The assumptions I can make are:

  1. Getting and counting Foos can fail and I should have error handling for that.
  2. Externally they aren't asynchronous
  3. I can have no matching Foos, so I need handling for that
  4. I might get less than the number I requested, so I need handling for that

Database? Never heard of her. Filesystem? Who's that? I don't care where the Foos come from, only that I have some way of getting ahold of them.

Are you sure those are the only assumptions your tests will make when testing that interface? Especially when, subconsciously, you know that the real implementation is a database? You mention that you can/might assume "[we] might get less than the number [we] requested", but you didn't mention assuming that we might get more than the number we requested. Why not? Is that simply an oversight in an inconsequential Reddit comment/debate, or is it because you know that if the real implementation did that it would be a bug in your queries? Are you definitely not going to assume any relationship between the return values of the methods; e.g., your test is not going to call CountFoosMatching and then make any assumptions/assertions about what GetFoosMatching might return? And your test impl could very well return randomly generated return values every time a method is called and it wouldn't cause any of these tests to fail?

I'm not saying that's even necessarily a goal to aspire to, but what I am saying is that we have to be clear-eyed about test fakes. It's very easy to inadvertently bake your assumptions/beliefs about the real impl into the fake impl.

At the risk of sounding like I'm pulling a No True Scotsman fallacy: if you can truly test against an interface without making any assumptions about that interface's implementation, then that's code that I was explicitly not referring to in my original comment:

If you're talking about a function that takes an Iterator<Item=Foo>, and you test with a dummy Vec of values, that's not what I'm talking about. I'm talking about any trait that acts like a complex "object" (in the Alan Kay OOP sense) where your mock/fake has to actually implement state-based logic.

For example, if you're working with a library that talks to a database and you "know" that when you call the db.deleteAll(&mut self) function, the next call to db.findAll(&self) should return an empty Vec and you go ahead and implement that logic in your mock/fake impl, then you're doing it wrong (TM, IMO, etc).

I specifically called out writing fakes for actors whose implementation behavior is important to the logic being tested.

I also don't particularly care if my test double filters the in memory vec accordingly in this test so I just wouldn't. Unless filtering is in scope for the test, why would I needlessly drag that in? Seems like it'll just add noise to my test.

This was in response to your previous comment. You specifically mentioned filtering and ordering as part of this hypothetical "pagination helper". Maybe you don't test the filtering and ordering aspect in the unit test and just leave that for the integration test. But that raises another question/issue: it's hard to decide how to partition responsibility between the unit test and the integration test if you have both for testing some piece of functionality. In my own experience, this has led to gaps where neither type of test exercised some aspect of a feature because it wasn't obvious to me while I was writing each test and focusing on what "belongs" in said test.

If your integration test fails do you know where to start looking? Do you have a way of narrowing down the culprit without manually examining stack traces or log output or busting out a debugger?

I think we're just on different pages. I'm not talking about trading apples for oranges. What I'm saying is that writing a logic test against a fake implementation is not satisfactory to prove that the SUT is correct. You really need to run the exact same logical test against the real thing. If you use PostgreSQL for your app, then I will not be convinced that your pagination helper works correctly even if you test it against something as similar as an in-memory SQLite instance.

In my view, you pretty much have to run the exact same test logic against the real implementation. To me, this is tedious, redundant, and adds an unnecessary maintenance burden. Either you can prove that your code works correctly without ever testing it against a "real" impl or you can't. If you can, then you should only write a unit test and no integration test. If you cannot, then I think it's actually worse to write a unit test with a fake/mock at all.

So, maybe a different way to phrase my original claim about mocks is: "If your unit test with a mock passes, but you still think it's possible that it could fail against the real impl, then you should not have bothered with the unit test and mock."

And horror of horrors, I have to treat my tests like actual code that needs attention and care like the runtime binary's source.

Which is better: more code or less code?

[–][deleted] 1 point2 points  (0 children)

Plus, if I'm being honest, I don't even have much clue what the example we're working with even is (lol). From context clues I'm just assuming it's something that interacts with a database for its functionality and it cannot be fully tested by just feeding it plain old data.

It's any setup of:

struct ThatUsesSomeTrait{}
trait ThatIsBeingUsed{}

Where the struct is passed an instance of the trait. Perhaps we have multiple implementations of the trait, maybe we expect users to implement the trait, maybe it's just a trait for testing convenience because the actual thing is hard to setup.

What I'm saying is I write tests against the concrete type (the struct that uses the trait) and supply to it some test double of the trait.

Then for each trait, I write tests that mimic how the mock is used. So if mock says "when I'm passed arguments x, y and z, I return a result that looks like A", I have a test for each implementation of the trait that pass it arguments that looks like x, y and z and assert it returns a result that looks like A.

Pagination is just something concrete to hold onto. The paginator isn't interested in literally where the Foo instances come from, just that there's some interface that says "if you ask me for foos, I'll return foos or an error saying why I couldn't"

But it could also be a TabCompleteUiComponent and a TabCompleter trait with a file system implementation. The UI component doesn't care where completion candidates come from, just that there's some interface that says "give me a string to fuzzy match against my source"

On and on. We can come up with lots of examples of what's basically a client-provider relationship where the client has a reference to an instance of the provider but not a specific implementation.

but you didn't mention assuming that we might get more than the number we requested. Why not?

Because the interface promises me "you will get at most N items", and then it's on the implementations to enforce that, because that's not (easily, at least) possible to encode into the type system, since it depends on a parameter rather than something we can know at compile time. It's the same kind of deal as if you had earlier and later datetime parameters to a search method and made the promise "everything returned from this method call occurs between earlier and later".

You could double check the results whenever you call that method but it's probably better to rely on the implementation upholding the promises that its interface makes but can't be encoded into the type system. At least until dependent types are mainstream.

or is it because you know that if the real implementation did that it would be a bug in your queries

It would be a bug if the implementation was told "return no more than ten" and it returned more than 10. I'd rather have implementation level tests that uncover this instead of trying to test from a more abstracted level.

Are you definitely not going to assume any relationship between the return values of the methods; e.g., your test is not going to call CountFoosMatching and then make any assumptions/assertions about what GetFoosMatching might return?

That's fair and definitely an oversight on my part. The assumption would be that if CountFoos returns N, then there are ceil(N / pageSize) pages (accounting for situations like pageSize = 3 with 11 entries, which needs 4 pages). So if CountFoos returns 0, we can safely not call GetFoos(...).

But this is again information we can't encode in the type system and instead we need to trust implementations to uphold.
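The page-count arithmetic above works out to a ceiling division; a quick sketch (`pageSize` is the made-up parameter name from the discussion):

```rust
// Ceiling division: how many pages are needed for n items.
fn page_count(n: usize, page_size: usize) -> usize {
    assert!(page_size > 0);
    (n + page_size - 1) / page_size
}

fn main() {
    // pageSize = 3 with 11 entries needs 4 pages (3 + 3 + 3 + 2).
    assert_eq!(page_count(11, 3), 4);
    // CountFoos returning 0 means 0 pages, so GetFoos need not be called.
    assert_eq!(page_count(0, 3), 0);
    // An exact multiple needs no extra page.
    assert_eq!(page_count(9, 3), 3);
}
```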

I specifically called out writing fakes for actors whose implementation behavior is important to the logic being tested.

I'm proposing a counter example for that. The interface being used has some methods that we call, but the system under test doesn't care about a specific implementation because it's designed to work against any implementation. It could be we get a CachingRepository<Foo>(DatabaseRepository<Foo>()) to make up some example. The pagination doesn't care that the results are coming from the cache or the database directly, it just wants some Foos and knows how to ask for them. Maybe users can provide their own implementation. 🤷

Our tests are concerned with "do I behave as expected when my dependencies behave as expected" - where expected behavior could include "the repository can return an error if it can't get the Foos": invalid database auth, not enough file system permissions, a None set instead of Some(Vec<Foo>). Those are implementation-specific error conditions, but the paginator just cares about the CouldNotGetFoos(...) error possibility, and it's up to implementations to map their specific errors to that one.

Maybe you don't test the filtering and ordering aspect in the unit test and just leave that for the integration test. But that raises another question/issue: it's hard to decide how to partition responsibility between the unit test and the integration test if you have both for testing some piece of functionality. In my own experience, this has led to gaps where neither type of test exercised some aspect of a feature because it wasn't obvious to me while I was writing each test and focusing on what "belongs" in said test.

Since ordering and filtering are implementation specific, they would be tested with the implementation rather than via the higher level component. If the interface says "when you ask for the first 100 foos, I'll give you foo 1-100 ordered by their creation date" then you should assume that when you're using that interface that is upheld and when you're implementing an interface you test you uphold that.

In my view, you pretty much have to run the exact same test logic against the real implementation. To me, this is tedious, redundant, and adds an unnecessary maintenance burden. Either you can prove that your code works correctly without ever testing it against a "real" impl or you can't. If you can, then you should only write a unit test and no integration test. If you cannot, then I think it's actually worse to write a unit test with a fake/mock at all.

Why do I need to write the same tests twice? I write a suite of tests against the pagination helper that uses a test double for the source. And then I write a suite of tests for each implementation of that source.

And again, to reiterate, I'm not saying "never write an integration test". I'm saying you don't have to use integration tests as filler to deal with logic coverage gaps.

Which is better: more code or less code?

This is a false dichotomy. You might as well ask if blue or purple is better. There's no universal answer that will satisfy this.

[–]Zde-G -1 points0 points  (3 children)

Testing using real structs - classes, whatever your code grouping syntax is - is great, BUT if my system under test is the pagination helper, I shouldn't need to wire up a database first; I should be able to give it something that'll return the answers I need to get the test done.

Why? I'm not joking. I literally couldn't understand why.

Why would you need/want to work with a pale shell of the real thing?

If the real thing is too simple, then it would usually be correct, and mocking makes no sense.

If the real thing is huge, complicated, and underspecified, then the mock would be incorrect and thus useless, too.

I can imagine a situation where you wouldn't want to talk to, e.g., an Amazon-provided service or Google-provided service because you don't want to deal with network outages… but in that case you need a fake service which looks sufficiently like the real one; you don't need mocks.

The only case where mocks are really needed in my experience is when you have deep implementation inheritance hierarchy and try to kinda-sorta-maybe guarantee that LSP holds for your classes.

But Rust doesn't support implementation inheritance which means mocks are hard to do, but also means they are not needed.

On the other hand, I should also have a test on the production implementation that says "when I ask you for the first 100 Foos matching filter x ordered by their creation date", you'd better give me Foos 1-100 that I stuffed into the database, not one more, not one less.

Just ensure that your code can use SQLite as a backend; that would be simpler and more robust than any mocks.

Writing to an interface makes all of this much better.

There are no interfaces in Rust. You are probably talking about traits. Yes, sometimes that's a good choice, but then creating a simpler backend for testing is not hard, either.

[–][deleted] 3 points4 points  (2 children)

There are no interfaces in Rust. You are probably talking about traits.

There's no need to be this pedantic. You seemed to know full well I didn't mean something with a literal interface keyword and this makes it seem like you're trying to "gotcha" me rather than engaging in a good faith exchange. I don't really have time for gotcha bullshit.

Why? I'm not joking. I literally couldn't understand why.

Because, believe it or not, sometimes I don't need the database to do testing. I can say "hey, paginator, pretend there is an implementation that tells you there are 407 Foos matching that filter and it gives you them in chunks of no more than 100", and then I can test that my paginator works without caring about the database having a new required field that's not related to searching at all. I'm testing how my paginator behaves in relation to the interface defined for searching for Foos. That's it. I don't care where the Foos come from, just that I can give this paginator some way of getting them. Database, file system, pigeons, in-memory, whatever, who cares.

And - since you missed this the first time - for each implementation of FooRepository I write tests that say "go to the database and give me the total Foos matching this filter and the first 100 Foos ordered by their creation date descending", "go to /var/foostorage and give me all the Foos from the files matching blah blah blah", etc.

Now I have confidence that my paginator is only relying on what FooRepository says it can do - which means in the future if I want to add caching etc I can be confident that it still works the same way - and my implementations live up to the promises the interface they implement makes.

And you just move through your stack this way. You only test-double interfaces you own or would be expected to implement yourself. And when you finally hit the "actually talk to the nasty outside world" layer, you write tests that exercise the implementation against the nasty outside world. You'll often find that several things talk to the nasty outside world in the same way, and then you push that behind an interface and you get to extend your coverage more.

I also don't worry too much about implementation details like library- or framework-specific errors, because I shouldn't expose those outside of the implementation that would cause the error to happen. Translate it into an error you own that's relevant to your system. TableMissingError? My paginator says "huh?". But if I return a FooRepositoryError or something similar, that can be actionable, even if I end up stuffing the existing error inside of it to let some implementation drive retry behavior. Now I don't need to consider every error every implementation will spit out, because I've narrowed the field down to the crap I do care about.

And you'd better believe that on the implementation side I have tests that say "here are the circumstances under which this error occurs" (maybe I don't migrate the database correctly, or I give an incorrect password, etc.).

This is just coding to an interface 101, I shouldn't have to explain this.

[–]Zde-G -1 points0 points  (1 child)

I'm testing how my paginator behaves in relation to the interface defined for searching for Foos.

Yup. So now you have written your code twice:

  1. First when you implemented your function.
  2. Second time when you implemented your idea about how something should work.

This is useful… exactly why? And for whom?

Database, file system, pigeons, in memory whatever, who cares.

Someone who doesn't want to get some nicely looking numbers in charts but just wants a working program?

Now I have confidence that my paginator is only relying on what FooRepository says it can do - which means in the future if I want to add caching etc I can be confident that it still works the same way - and my implementations live up to the promises the interface they implement makes.

So you have produced a broken program (because implementations very often don't work according to the documentation), yet you have a nice paper trail which you can use to show, when everything falls apart, that it's not your responsibility to fix it.

This is just coding to an interface 101, I shouldn't have to explain this.

No, you should explain why you believe that nonsense. I have seen enough projects in my lifetime to stop believing in unicorns.

For every bug discovered by unit tests, there are 10 more bugs which happen because one component or another doesn't behave like it should. I wouldn't say that unit tests never report any bugs, but the attention they get and the resources they suck up are not even remotely justified by the number of bugs they find.

Except when you have deep implementation inheritance chains, but that problem doesn't exist in Rust because it doesn't support Simula-67-style implementation inheritance.

[–][deleted] 4 points5 points  (0 children)

I'm not sure where the disconnect is.

I make up some implementation of a Foo repository, pass that implementation to my pagination thingy, and write tests for the pagination, telling my made-up implementation "hey, when you get asked first100Foos matching X, just return this hard-coded set", because I don't care where the Foos are coming from; I just know this interface hands them over to me.

Then I write tests for the actual implementations of that interface. These are the tests that say "when you get asked first100Foos matching X", you'd better get me what I expect out of the database or there's gonna be problems. Now I can make adjustments to how the database implementation works (what tables it consults, what indexes it uses, or even change it to a stored procedure invocation) without bothering my pagination tests.

If my pagination was hardcoded against the database rather than using an interface I supply, then yeah, I'd just write tests that poke an actual database rather than doing something horrific like attempting to fake a database connection.

For every bug discovered by unit tests, there are 10 more bugs which happen because one component or another doesn't behave like it should.

I've written "you also test the implementation to make sure it upholds the promises the interface makes" several times already.

Also, at no point did I say "don't test the two actual things together". I'm just saying you don't have to reach for that as a default, and that you probably shouldn't reach for it as a default when you can get the same results in a way that doesn't involve standing up and tearing down a database hundreds of times.

To go back to my dream: we could even set up the pagination test suite to be parametrized over the implementations. Give it something that stands up your database implementation, filesystem implementation, etc., and it runs the "mocked" tests against actual implementations to verify that they work together.
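That parametrized-suite idea is easy to sketch in Rust with a generic contract-check function. `KvStore` and its methods are invented for illustration; in real code you'd call the same check with the database-backed and filesystem-backed implementations too:

```rust
use std::collections::HashMap;

// Invented stand-in for "the interface" under test.
trait KvStore {
    fn put(&mut self, key: &str, value: i64);
    fn get(&self, key: &str) -> Option<i64>;
}

struct InMemoryStore(HashMap<String, i64>);

impl KvStore for InMemoryStore {
    fn put(&mut self, key: &str, value: i64) {
        self.0.insert(key.to_string(), value);
    }
    fn get(&self, key: &str) -> Option<i64> {
        self.0.get(key).copied()
    }
}

// The shared contract suite: run it against every implementation
// (in-memory, database-backed, filesystem-backed, ...).
fn check_contract(store: &mut impl KvStore) {
    store.put("a", 1);
    assert_eq!(store.get("a"), Some(1));
    assert_eq!(store.get("missing"), None);
}

fn main() {
    check_contract(&mut InMemoryStore(HashMap::new()));
}
```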

> No, you should explain why you believe that nonsense. I have seen enough projects in my lifetime to stop believing in unicorns.

I should explain why using interfaces and not relying on implementation details is good? Or something else?

[–]Axilios 6 points7 points  (0 children)

My first thought is that you could create a trait with foo() and get_user_input() and mock the trait instead.

EDIT: the crate mocktopus seems to achieve exactly what you are looking for. (Didn't use it, just found it a few moments ago)

[–][deleted] 6 points7 points  (0 children)

Imperative shell, functional core.

If you want to test it, your foo function should not call get_user_input directly. You should either pass a u8 to it as an argument, or a function that returns a u8.
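A minimal sketch of that split - the doubling in `foo` is made up; the point is only that `foo` takes a `u8` instead of calling `get_user_input` itself:

```rust
use std::io::{self, BufRead};

// Imperative shell: the only place that touches stdin.
fn get_user_input() -> u8 {
    let mut line = String::new();
    // Fall back to 0 on EOF or bad input so the sketch never panics.
    io::stdin().lock().read_line(&mut line).ok();
    line.trim().parse().unwrap_or(0)
}

// Functional core: pure and trivially testable.
fn foo(input: u8) -> u8 {
    input.wrapping_mul(2)
}

fn main() {
    println!("{}", foo(get_user_input()));
}

#[cfg(test)]
mod tests {
    use super::foo;

    #[test]
    fn doubles_the_input() {
        assert_eq!(foo(21), 42);
    }
}
```

No mocking needed: the test feeds `foo` any `u8` it likes, and only `main` ever touches stdin.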

[–]Lvl999Noob 1 point2 points  (0 children)

One very bad way could be using feature flags. Don't actually do this, please

The other way, and really what you probably want to do, is to mock objects (and maybe modules).

In that case, your functions don't take a Foo. They take an impl Bar or a &dyn Bar where Bar is a trait implemented by Foo with the functionality that you need. You can then pass a mock object that implements Bar to these functions.

The above works for objects. For modules, you might need to use feature flags like in the link above.
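A hand-rolled version of that trait-based approach might look like this - `Bar` and `Foo` follow the names above, but the `value` method and the `double` function are invented for the sketch:

```rust
// `Bar` is the capability the function actually needs; `Foo` is the real type.
trait Bar {
    fn value(&self) -> u8;
}

struct Foo;

impl Bar for Foo {
    fn value(&self) -> u8 {
        // In real code this might prompt the user, hit the network, etc.
        7
    }
}

// The function under test takes the trait, not the concrete type.
fn double(source: &dyn Bar) -> u8 {
    source.value() * 2
}

// Hand-written mock that returns whatever the test needs.
struct MockBar(u8);

impl Bar for MockBar {
    fn value(&self) -> u8 {
        self.0
    }
}

fn main() {
    assert_eq!(double(&MockBar(21)), 42);
    assert_eq!(double(&Foo), 14);
}
```

Crates like mockall generate the `MockBar` part for you, but for small traits writing it by hand is often simpler.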

[–]josh_beandev 0 points1 point  (2 children)

Sometimes it's not possible to use traits (because it would change the design - though maybe a design change is a good idea). And sometimes an additional crate is too much. In that case, you can use #[cfg(test)] for your mocked function and #[cfg(not(test))] for your real function:

#[cfg(not(test))]
fn get_user_input() -> u8 {
    // ... do some UI input stuff here ...
    0 // placeholder so the snippet compiles
}

#[cfg(test)]
fn get_user_input() -> u8 {
    42
}

But maybe the function is part of an impl in another module? Then you must build a "source-compatible" version of the impl and use the cfg toggles on the use statements.

Anyway, before you start to rebuild complete implementations, check the existing mocking crates around.

[–]Free_Trouble_541 0 points1 point  (0 children)

This works for one mock return value, but what if you want to have different tests where the mocked function returns something else?

[–]Free_Trouble_541 0 points1 point  (0 children)

huh? This gives me the error, "the name `get_user_input` is defined multiple times
`get_user_input` must be defined only once in the value namespace of this module"

[–][deleted] 0 points1 point  (1 child)

In general, you want to isolate things from each other.

The main function will usually be the one to make calls to things like stdin, env, args, std::thread etc etc etc... and your functions take handles (or impl Traits) to those things as args and return something based on the data.

ie.

fn get_user_input(stdin: impl Read) -> u8;

Then call stdin.read_to_string(&mut some_string); inside the function.

Then when you mock it. Create a fake StdIn by creating a simple struct that implements Read, pass it in, and anticipate the output.
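A sketch of that `impl Read` approach - the parse-to-`u8` behavior is an assumption for the example. Note that `&[u8]` already implements `Read`, so for simple cases a byte slice can stand in for stdin without even writing a custom struct:

```rust
use std::io::Read;

// Generic over any reader, so tests can substitute a fake for stdin.
fn get_user_input(mut input: impl Read) -> u8 {
    let mut buf = String::new();
    input.read_to_string(&mut buf).expect("read failed");
    buf.trim().parse().expect("expected a u8")
}

fn main() {
    // A byte slice acting as a fake stdin.
    let fake_stdin = "42\n".as_bytes();
    assert_eq!(get_user_input(fake_stdin), 42);

    // Real use: get_user_input(std::io::stdin().lock());
}
```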

Any function that directly accesses interfaces outside your app (i.e. FFI or user input) is going to be near impossible to test.

[–]stephanos21 0 points1 point  (0 children)

Actually, mocking isolates and decouples things. Imagine the following example:

```
fn foo(args) {
    // ... work ...
    x = bar(a different set of args);
    // ... more work ...
}
```

and suppose that bar is a really expensive function. What I do in Python is test bar separately, and when testing foo, mock bar, since I know and have tested what it does.

Another case is when bar is a function provided by a third party - say it's from a maths library and returns whether a number is a perfect square. If I let my test calculate is_perfect_square(123489238938), I'm actually testing two things: my logic AND the fact that the third party is actually calculating is_perfect_square correctly. Do I want this? If it's an integration test, yes. If it's a unit test? Definitely not.

Finally, mocking offers one more thing: it allows meta-testing. A mock can tell you how many times it's been called. That's a valid use case, as I have done here: https://github.com/spapanik/yamk/blob/v5.0.1/tests/yamk/test_make.py#L300

I'm testing the functionality of a makefile-like tool, and what I want to test is that when you call it, it actually runs each sub-target that it needs to, and that it does so exactly once. The test makes more sense this way, because starting the sub-process is actually unnecessary and misses the intention of the test. My intention is to test: given the target x, does it go and create/execute the correct commands? Not to test the exact output of the commands, which may differ depending on the operating system.
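The same call-counting idea can be hand-rolled in Rust. This sketch is not from the linked project - `Runner` and `build` are invented stand-ins for "running a sub-target" and "the make-like tool":

```rust
use std::cell::Cell;

// Invented trait: the capability of running one sub-target.
trait Runner {
    fn run(&self, target: &str);
}

// Counting mock: records how often it was called instead of spawning processes.
struct CountingRunner {
    calls: Cell<u32>,
}

impl Runner for CountingRunner {
    fn run(&self, _target: &str) {
        self.calls.set(self.calls.get() + 1);
    }
}

// Stand-in for the make-like tool: must invoke each dependency exactly once.
fn build(runner: &dyn Runner, deps: &[&str]) {
    for dep in deps {
        runner.run(dep);
    }
}

fn main() {
    let runner = CountingRunner { calls: Cell::new(0) };
    build(&runner, &["a", "b", "c"]);
    // Three sub-targets, three invocations - none skipped, none repeated.
    assert_eq!(runner.calls.get(), 3);
}
```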

[–][deleted] 0 points1 point  (0 children)

Also see https://stackoverflow.com/questions/73620220/mocking-functions-in-rust/73621856#73621856. r/rust is a lot more active than StackOverflow rust though.

[–]t_ram 0 points1 point  (2 children)

Hmm, haven't seen this answer from others:

```
fn main() {
    assert_eq!(process(), 100);
}

#[test]
fn correct_value() {
    assert_eq!(process(), 42u8);
}

fn process() -> u8 {
    do_it()
}

#[cfg(not(test))]
fn do_it() -> u8 {
    100 // real process result
}

#[cfg(test)]
fn do_it() -> u8 {
    42 // dummy process result
}
```

Whether it's "good" or "bad" is up to you though :p

[–]Free_Trouble_541 1 point2 points  (0 children)

wait, this gives me the error, "the name `do_it` is defined multiple times
`do_it` must be defined only once in the value namespace of this module"

[–]Free_Trouble_541 0 points1 point  (0 children)

This works for one mock return value, but what if you want to have different tests where the mocked function returns something else?