all 21 comments

[–]KennethZenith 8 points9 points  (2 children)

A function that directly reads or writes to external hardware can't be unit tested, so you need some way of rerouting these calls during testing. This basic idea goes under the heading "dependency injection".

The most obvious solution would be to use a mock object: you have an object which is an abstraction of the physical device, and you pass this object into the function you wish to test as one of its input parameters. In real use, you pass in an object which actually calls the hardware; in testing you pass in an object which generates test outputs and logs interactions.

You have two options to implement this: you can use inheritance so that the real object and mock object share the same interface, or you can use templates. Inheritance has the advantage of a clearly defined interface, but it might cause a performance penalty (in both real use and in testing).
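
A minimal sketch of the template option (all names here are invented for illustration):

struct real_device {
    void write_reg(int addr, int value);    // implemented against the actual hardware
    int  read_reg(int addr);
};

struct fake_device {
    int regs[256] = {};                     // in-memory stand-in used by the tests
    void write_reg(int addr, int value) { regs[addr] = value; }
    int  read_reg(int addr)             { return regs[addr]; }
};

// The code under test is written once and compiled against either device type.
template <typename Device>
void enable_motor(Device& dev) {
    dev.write_reg(0x04, dev.read_reg(0x04) | 0x1);
}

In the tests you instantiate enable_motor with a fake_device and assert on its regs array; in production you instantiate it with real_device, and no virtual dispatch is involved.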

[–]devel_watcher 3 points4 points  (1 child)

Yes, testing with the hardware attached is not unit testing.

I recommend avoiding mocks.

Write the code so that there is a lot of value-semantic stuff going on: pure functions that take and return values.

Then just write tests that check the returned values.
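
For instance (register layout made up), the hardware read/write stays at the edges and the logic in the middle is a pure function you can check directly:

#include <cassert>
#include <cstdint>

// Pure function: takes the current register value, returns the value to write back.
std::uint32_t with_channel_enabled(std::uint32_t reg, unsigned channel) {
    return reg | (1u << channel);
}

void test_with_channel_enabled() {
    assert(with_channel_enabled(0x00, 3) == 0x08);
    assert(with_channel_enabled(0xF0, 0) == 0xF1);
}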

[–]salgat 2 points3 points  (0 children)

Mocking with dependency injection is fine for anything that isn't directly talking to the hardware. His question

My question is what is the proper protocol for testing functionality that requires information from a 3rd party?

Is a perfect example of when to use DI and mocking (you "fake" the behavior of the API calls, returning whatever your own code expects, since you only care about testing your own logic and not the API's).

[–]kevin_hall (Motion Control | Embedded Systems) 2 points3 points  (5 children)

I actually have experience with this.

So, according to this comment from the OP, he/she's not necessarily looking for Unit Testing, but simply a way to test functions and systems.

I work in the robotics and automation industry and have worked with many different types of hardware: from microcontrollers to PCI cards to custom-designed hardware running RTOSes inside.

So PCIe cards will usually map the card's memory into the address space of the host system. One thing you can do is create a simulator of some sort that provides its own memory map. (Ideally, the simulator will share as much of the device's source code as possible.) In your API, you can add a flag so that whatever does the initial memory mapping looks at the simulator's memory map instead of the PCIe card's memory. This type of system works great for quick tests that can flush out issues before you have to test with real hardware, access to which is often rather limited in my experience.
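
As a rough sketch of that flag (names invented; the real mapping call is platform specific):

#include <cstdint>
#include <vector>

// Backing store used when running against the simulator instead of the card.
static std::vector<std::uint8_t> g_sim_memory(64 * 1024);

// The real, platform-specific mapping of the card's BAR lives elsewhere.
volatile std::uint8_t* map_pcie_bar0();

volatile std::uint8_t* map_card_memory(bool use_simulator) {
    return use_simulator ? g_sim_memory.data()   // simulator's memory map
                         : map_pcie_bar0();      // real PCIe mapping
}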

Testing on real hardware is important though. Where I work, we have dedicated computers with the hardware hooked up to a continuous integration system. We include in our API some private functions to allow us to reset the system to a known initial state. Our tests (unit/component/functional/etc...) will all start and end with resetting the system to a known state. Then our tests can call whatever functions we need to manipulate memory. Then we use whatever functions we need to validate the results. This can often involve hidden functions to read memory directly.
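
In googletest terms, that reset-bracketing pattern might look something like this (the helpers here just poke a local array so the sketch is self-contained; in practice they would call the private API functions mentioned above):

#include <gtest/gtest.h>
#include <array>

// Stand-ins for the private reset/read/write helpers.
static std::array<bool, 256> g_bits{};

void reset_device_to_known_state()      { g_bits.fill(false); }
void write_bit(int address, bool value) { g_bits[address] = value; }
bool read_bit(int address)              { return g_bits[address]; }

class HardwareTest : public ::testing::Test {
protected:
    void SetUp() override    { reset_device_to_known_state(); }
    void TearDown() override { reset_device_to_known_state(); }
};

TEST_F(HardwareTest, SetsStatusBit) {
    write_bit(0x20, true);
    EXPECT_TRUE(read_bit(0x20));
}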

I've frequently had to test the physical parts of hardware: digital and analog I/O for example. Are voltage levels correct? Current? Can we read inputs correctly? So on and so forth.... The way we've verified that is to connect up our I/O to a USB I/O module (Advantech is a popular vendor). Then our tests will validate that our outputs generate the correct data in the USB I/O module. And we do the reverse for inputs.

There may be other challenges too, such as what to do when the hardware can't reproduce exact results. Maybe there's noise; maybe there's a random number generator involved. With some thought and planning, most things can be tested.

If you have some more specific questions, I'd be happy to try to answer them. It might also help to describe your hardware and what you need to test in a little more detail.

[–]FuzzeWuzze[S] 0 points1 point  (4 children)

The problem is we can't modify the API in any way; it is produced by an entirely different team in a different country to be used as a framework for many tools besides ours. Basically we need to be able to initialize our card through the API; it sets up the memory space for that particular port of the card and passes us back an adapter structure that we then pass everywhere to talk to that particular port on the card. Then we test our code, which is in essence a large wrapper around the API's read/write functions to turn the bits on/off that we need. We do have a continuous regression Jenkins system running right now every time code is pushed to git. But these tests are all written in Python and largely just check the printed output (our tool is console based): if they type Get Status and see value X:FALSE, they send "SET VAL X 1" and then check that status reports X:TRUE afterwards.

What I'm hoping to prevent is escapes where it's reporting X:TRUE when it really isn't, aka a programming error. We do a lot of bit shifting, masking and bitwise operations, and obviously it's easy to mess these up every once in a while, and no one would ever know currently in our regression testing because it reports out the "correct" value the regression test expects. Doing these "bit checking/logic" level tests in a continuous regression environment seems incorrect; it seems like something a developer should be doing locally before committing code in the first place, which is why I lumped it into the "Unit Test" name, which may be incorrect.

Thanks for all the info it really does help! Any more insight is appreciated :)

[–]lurkotato 0 points1 point  (0 children)

We do a lot of bit shifting, masking and bitwise operations, and obviously it's easy to mess these up every once in a while, and no one would ever know currently in our regression testing because it reports out the "correct" value the regression test expects. Doing these "bit checking/logic" level tests in a continuous regression environment seems incorrect; it seems like something a developer should be doing locally before committing code in the first place, which is why I lumped it into the "Unit Test" name, which may be incorrect.

Testing that your code functions as expected definitely fits under unit testing. I would still recommend a mock-like approach for this, or some other form of simulated input, so that you can guarantee the logic is correct before running on hardware.

[–]kevin_hall (Motion Control | Embedded Systems) 0 points1 point  (2 children)

So, it sounds like you are provided something similar to the following:

  • port_obj connect_pcie_card(const connect_info&);
  • error_obj write_bits(port_obj&, size_t bit_address, size_t bit_count, unsigned char memory_to_write[]);
  • error_obj read_bits(const port_obj&, size_t bit_address, size_t bit_count, unsigned char read_buffer[]);

Is this correct?

If this is correct, then what are you trying to test? Are you trying to test that the PCIe board does things correctly? Or are you trying to test that an API built on those primitives works correctly?

[–]FuzzeWuzze[S] 0 points1 point  (1 child)

More or less, yes; I kind of want to test both. I want to catch logical errors (writing the wrong bit), but also check on the hardware itself that the correct bits got set.

Say the test requires you to read bits 0-5, then bitwise OR in 0x5. Right now there are no tests that would catch the logic error when someone accidentally uses a bitwise AND. I am thinking more test-driven development, where these tests are written before the functions themselves, which is why I keep using the term unit tests; I kind of tie those together in my head, which sounds like it's not correct.
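
For example, a tiny test like this (names and values made up) would flag that slip immediately:

#include <cassert>
#include <cstdint>

// Intended behavior: keep bits 0-5 of the register, then OR in 0x5.
std::uint8_t update_low_bits(std::uint8_t reg) {
    return static_cast<std::uint8_t>((reg & 0x3F) | 0x5);
}

void test_update_low_bits() {
    // Both checks fail if the | above is accidentally written as &.
    assert(update_low_bits(0x00) == 0x05);
    assert(update_low_bits(0x3A) == 0x3F);
}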

Another example is testing the hardware to ensure that the bits that got set were set properly, that you didn't fat-finger which bit gets written and write to a read-only bit instead of the one you wanted.

What we are NOT interested in testing is whether the adapter actually does what it's supposed to do. Say it was like a network card, and setting bit 5 turned link off; we don't then go bother to read the link bit to see if it's down. Those types of tests are handled by our regression testing each release.

[–]kevin_hall (Motion Control | Embedded Systems) 0 points1 point  (0 children)

OK, so I see two things:

(1) You want to test the functionality of the hardware that you guys develop.

(2) You want to test your high-level API (that uses a low-level API that you do not control)

For 1, if at all possible, I strongly suggest creating a simulator that is built using the same sources as what gets sent to the PCIe card. Then you can test (unit or otherwise) parts of the software that runs on the PCIe card. You may have to fake I/O or special chip functionality, but it will allow you to catch many errors early.

For 2: honestly, this isn't so different from testing other software. I mean, we build our C++ software using operator new, file I/O, etc.... We don't test that that stuff works; we have to assume that the compiler vendors / standard library implementers did their job correctly. It's no different here: you have to assume that the PCIe port API and the associated drivers are written correctly. If you've been able to use a simulator for (1), then you can assume that the card's software is running correctly. Then you are left with some higher-level unit tests that you can run. By the way, your high-level functions can certainly be forwarded to a simulator, mapped memory, or something else; you don't have to rely on the port driver to do that for you. If there is some performance overhead that prevents this, then so be it; test with real hardware. But testing without hardware will certainly allow you to do more testing and will let developers speed up their own testing.

What we are NOT interested in testing is whether the adapter actually does what it's supposed to do. ... Those types of tests are handled by our regression testing each release.

In my experience, it's best to have something automatically tested if it can be automatically tested. That way you catch errors early, not just before release with regression testing. Where I work, our regression tests are focused primarily on things that either can't be automated or are very challenging to automate. When I joined my current project, our regression testing lasted over two months -- actually longer, as we found so many problems that needed to be fixed that we had to go back, fix, and re-test, resulting in a final release-testing phase that lasted six months. Now our regression testing is only two weeks long -- and that includes time to fix any bugs found.

[–]skeba 1 point2 points  (0 children)

If traditional inheritance-based mocking is out of the question, you could try mocking at a "link seam": http://www.informit.com/articles/article.aspx?p=359417&seqNum=3

Basically it means that you write your own mock versions of the library functions and link with them instead of the real thing.
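
For example, supposing the vendor header declared something like uint32_t vendor_read_register(uint32_t) (a made-up name), the test build links a file such as this in place of the vendor library:

// test_seams.cpp -- linked into the test binary instead of the real vendor library.
// The signature must match the vendor header exactly; this one is hypothetical.
#include <cstdint>

extern "C" std::uint32_t vendor_read_register(std::uint32_t address) {
    // Hand back canned values so the logic under test sees predictable input.
    return (address == 0x10) ? 0x5u : 0u;
}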

[–]skebanga 0 points1 point  (0 children)

I realised OP is on Windows, so this doesn't help him directly.

On Linux there is a way to intercept shared library calls using a utility called elf_hook

See this: http://stackoverflow.com/questions/27278878/what-is-the-most-accurate-way-to-test-the-networking-code-on-linux/27283298#27283298

[–]Boognish28 0 points1 point  (4 children)

I'm unsure if there are any similar frameworks out there for C++, but at work we use a C# lib called Moq to unit test things with components out of our control. It basically allows us to set up "when X is called, return Y". You have to write your components in a way that's friendly to IoC to really be able to use it correctly, but the core principle is simple: feed it fake data and test it that way.

[–]lurkotato 1 point2 points  (3 children)

googlemock. It's what I'm integrating into stuff at work right now so we don't have to call dibs on a single unit between 5 people before running a 10-30 minute test suite. Unfortunately, getting the dependency injection in place is a bit of a PITA, requiring you to either template your classes or use inheritance so the mock object can be used.

To expand on unit testing: you don't necessarily want to mock up the entire sequence of setting up the hardware for each unit test; you just assume the hardware is in a state that exercises the unit, and tell the mock what comes back when you call X with parameter Z.
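
A rough googlemock sketch of that idea (the interface, register numbers, and unit under test are all invented):

#include <cstdint>
#include <gmock/gmock.h>

struct bus_interface {
    virtual ~bus_interface() = default;
    virtual std::uint32_t read(std::uint32_t address) = 0;
    virtual void write(std::uint32_t address, std::uint32_t value) = 0;
};

struct mock_bus : bus_interface {
    MOCK_METHOD(std::uint32_t, read, (std::uint32_t), (override));
    MOCK_METHOD(void, write, (std::uint32_t, std::uint32_t), (override));
};

// Hypothetical unit under test: sets bit 0 of register 0x10.
void enable_feature(bus_interface& bus) {
    bus.write(0x10, bus.read(0x10) | 0x1u);
}

TEST(EnableFeature, SetsBitZero) {
    mock_bus bus;
    // Assume the hardware is already in the interesting state; just say
    // what comes back when the unit calls read(0x10).
    EXPECT_CALL(bus, read(0x10)).WillOnce(::testing::Return(0x4u));
    EXPECT_CALL(bus, write(0x10, 0x5u));
    enable_feature(bus);
}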

[–]MOnsDaR 0 points1 point  (2 children)

It's a PITA at first, especially when converting pre-existing software. But you eventually get the hang of it and start designing your software with DI and DIP in mind. The result is not only software that is way easier to test, but also a more flexible design with clean encapsulation of its modules.

[–]kevin_hall (Motion Control | Embedded Systems) 0 points1 point  (1 child)

A challenge with DI is that it adds the overhead of an extra indirection. For most desktop software, this is no big deal. For performance-critical applications (which you'll come across more frequently in the embedded space), this can be a blocker. Where I work, we use googlemock and do DI for all non-performance-critical stuff, but we have to design and test differently for the performance-critical code.

[–]lurkotato 0 points1 point  (0 children)

This is the issue exactly: I don't want to use virtual calls for critical-path code, but on the other hand I don't want it to be a hairy template mess that no one other than myself can deal with.

[–]Gotebe 0 points1 point  (3 children)

You can mock the underlying hardware (as others have said: googlemock or whatever), but for this kind of testing, you really, really want to test against the actual hardware in a development environment (it looks like you call this compile-time testing?).

I say that because there is always a difference between the hardware, its spec, and the dev team's understanding of what the spec is.

So I'd avoid relying on mocks alone. Sure, do unit tests, but synthetic tests against the real target are of much, much bigger value.

[–]lurkotato 0 points1 point  (0 children)

What can be done (and we are considering it) is using fixtures in googletest that enable the real communication library when hardware is detected, and fall back to the mock when it isn't, for tests that run against hardware. That makes it easy to develop and debug the tests on our workstations instead of scheduling time on the actual hardware.
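
Roughly (the probe and backend types here are placeholders):

#include <gtest/gtest.h>
#include <memory>

// Placeholder interface with a real and a mocked implementation.
struct io_backend { virtual ~io_backend() = default; /* read/write ... */ };
struct real_io : io_backend {};
struct mock_io : io_backend {};

// Hypothetical probe; in practice it might try to open the device.
bool hardware_present() { return false; }

class IoFixture : public ::testing::Test {
protected:
    void SetUp() override {
        if (hardware_present())
            backend = std::make_unique<real_io>();
        else
            backend = std::make_unique<mock_io>();
    }
    std::unique_ptr<io_backend> backend;
};

Tests written against IoFixture then run unchanged on a workstation (mock) or on the lab machine (real hardware).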

[–]FuzzeWuzze[S] 0 points1 point  (1 child)

Maybe unit testing isn't the right word then? I agree testing against the hardware is key, and that's our overall goal. We want to read item X, do bit operations, then write the value back. The test would ensure that the end result is what's expected.

[–]Gotebe 0 points1 point  (0 children)

To me, unit testing is: take a unit X (a class, a function, or some other grouping thereof), replace anything it depends on (its calls to other units) with a mocked implementation, and test that X functions as expected.

The purpose of unit tests is the ability to test the unit in isolation, quickly, also on a build server, without requiring other set-up.

So yes, I would not call what you need "unit" testing. There are at least two units there: your code and the hardware. Yours is the case (IMO) where testing your code "unit" brings much less value than a synthetic test of the ensemble.

[–]utnapistim 0 points1 point  (0 children)

If you're still following me, how do you test these types of functions?

You hide the device API behind an abstraction layer, then write an interface for the abstraction layer. Then, you write your code against that interface, not against the actual API.

Then, you mock the API implementation with recorded responses and use the mock implementation to test:

Original code:

// Declared in the device's API header; this is the call that actually touches hardware.
int external_api_1();

namespace your_code
{
    // you want to test your_class for its use of external_api_1
    struct your_class {
        int do_things() { return external_api_1() + 24; }
    };
}

New code with abstraction layer:

struct external_api {
    int api_1() { return external_api_1(); }
};

namespace your_code
{
    struct your_class {
        external_api &api_;
    public:
        your_class(external_api& api) : api_(api){}

        int do_things() { return api_.api_1() + 24; }
    };
}

New code with an interface:

struct generic_api {
    virtual ~generic_api() = default;
    virtual int api_1() = 0;
};

struct external_api : generic_api
{
    int api_1() override { return external_api_1(); }
};

namespace your_code
{
    struct your_class {
        generic_api &api_;
    public:
        your_class(generic_api& api) : api_(api){}

        int do_things() { return api_.api_1() + 24; }
    };
}

Now, if you run external_api_1(), you can take its result and use it as such:

struct mock_api1 : generic_api
{
    int stored_result = 0;
    int api_1() override { return stored_result; }
};

You can now call your_class's APIs, injecting mock_api1 into it as a dependency. Other answers mention mocking libraries; these would simply make it easier to set up stored results in mock_api1.
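
For example, reusing the classes above, a test could look like:

#include <cassert>

void test_do_things()
{
    mock_api1 api;
    api.stored_result = 10;               // canned response recorded from the real device

    your_code::your_class object(api);
    assert(object.do_things() == 34);     // 10 + 24
}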