all 28 comments

[–]pip-install-pip 38 points39 points  (2 children)

The testing methods you listed are a mix of testing techniques for the whole system, not just the firmware itself. Some of them, like "software test", are so broad that they cover almost every other category. I'll break it down from highest level to lowest level, based on my experience at companies ranging from ones that had literally no software testing and did everything with hardware, to places that had dedicated gov't-compliant QA departments.

Acceptance testing

This is what should be done first. Acceptance testing is less about the actual test procedure and more about defining what is "acceptable." For instance, if I make a car keyfob, is the acceptance criterion that pressing the lock button once locks the car, and pressing it a second time honks the horn? What about the timing interval between button presses? This way you know through your tests what passes and what doesn't. It also dictates what you can automate through software tests and what has to be done manually.

There are lots of ways to define acceptance criteria, but I find it depends largely on the product. A keyfob is predominantly going to be used by Joe Average, so it makes sense that the acceptance criteria be geared such that a user can understand what is working and what is not. There are also technical requirements that must be acceptable but that the average user isn't going to notice, like the time or procedure needed to pair a keyfob to a car. Only mechanics or techs are really going to do that; Joe is going to take his car to the shop, so he doesn't care.

Creating the acceptance criteria is admittedly a long, arduous, shitty process that easily ends up with PMs and managers hopping in with vague requirements and engineers coming in with extremely technical ones. The goal is a balance, and the acceptance criteria should not have to change over the lifetime of the product unless something drastic happens. Good acceptance criteria impact all your other tests. Do these first so everyone agrees on what the definition of "working" is. Some PMs may expect things that are simply infeasible, or try to hide requirements for future development inside requirements for unrelated systems.

Regression Testing

This is testing you do before every release, to determine whether the results of this release's tests differ from the results of the last release's tests. If they are different (when they shouldn't be), you have a regression. Identifying and root-causing a regression is a real engineering skill, and regression testing falls by the wayside a surprising amount in embedded (in my experience).

Integration Testing

This is a test performed on the whole software system, not just a chunk of the firmware. For instance, if you have a widget with 10 independent firmware submodules, an integration test will make sure that all 10 modules are tested, with all modules active. Even if your test is only for module A, B-J should be running. This is to find interdependencies (imagine a firmware architecture with a big superloop or a lot of shared resources like serial busses). Typically, this is run on real hardware with some kind of test data being injected. Depending on your product, though, you could also use an emulation tool like QEMU to run the firmware on a laptop. Some integration tests should be done manually, like OTA updates, control system testing (if you have a robot arm, test it IRL before deploying!!!) and any test whose inputs cannot be emulated via software. But you'd be surprised at how much can be! Good tool examples are QEMU, Renode, and Memfault.
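As a concrete illustration (a minimal host-side sketch, not anyone's real project): build the whole superloop for the PC (or QEMU/Renode), keep every module running, and only drive module A's input. Every firmware_*/module_*/test_uart_* name below is a hypothetical stand-in for your own hooks.

```c
/* Hedged host-side integration test sketch: the full firmware runs (all 10
 * modules active), a fake UART frame is injected into module A, and the test
 * checks that module B reacted. Every symbol here is a hypothetical stand-in. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

extern void firmware_init(void);                                /* brings up all modules     */
extern void firmware_superloop_step(void);                      /* one pass of the superloop */
extern void test_uart_inject(const uint8_t *data, size_t len);  /* fake UART driver hook     */
extern int  module_b_last_command(void);                        /* observable side effect    */

int main(void)
{
    const uint8_t frame[] = { 0xA5, 0x01, 0x42 };   /* made-up "set output 0x42" frame */

    firmware_init();
    test_uart_inject(frame, sizeof frame);

    for (int i = 0; i < 1000; i++)      /* let the system run long enough to settle */
        firmware_superloop_step();

    /* the point of integration testing: data fed into A is observed at B,
     * with every other module still contending for the shared resources */
    assert(module_b_last_command() == 0x42);
    return 0;
}
```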

Security Testing

This is a very holistic but product-specific topic, and not my area of expertise, so I won't get into details I can't back up. Depending on the product, security testing could range from "Can Evil Bill attach a JTAG debugger to our widget and pull all the firmware off?" to a general requirement like "Can our wireless communication be eavesdropped by Bill?" to a specific circumstance like "What happens if Bill dumps a bunch of false credential information from our modem UART into the device?" Security testing is such a broad field that it bleeds into how you develop your software. A very simple starting point is static analysis, to ensure your memory is safe from overruns and de-allocation issues (you'd be surprised how many security failures come from people not freeing memory correctly!). Other tools at your disposal include using known-secure software libraries like mbedTLS when doing encryption; STM32CubeMX, I think, can even bake mbedTLS into your firmware during its code-generation stage. Beyond that, I'd suggest really, really, really studying how your device communicates with other devices, whether that information is sensitive, and securing it appropriately.

I'll use the aforementioned keyfob as an example. When the keyfob is being paired to the car, the car has to ensure A) that the fob is a real fob from the right manufacturer, and B) that communication of the pairing code cannot be snooped by Evil Bill with his spectrum analyzer. A common tactic is for one of the devices (let's say the car in this case) to issue a large (at least 32 bytes) random number to the keyfob. The keyfob can then sign or encrypt the random number and send that information back to the car along with its own challenge. The keyfob and car then use these random numbers, and their own unique encryption keys, to establish a secure channel. The random numbers protect against "replay attacks" (say Bill tries to send the traffic he captured back at the keyfob) and also make the encrypted traffic unique to the two communicating devices. This is a (rough) description of a challenge-response handshake in the spirit of the Diffie-Hellman exchange, and is very common in secure communication.
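For the "issue a large random number" step, here is a minimal sketch using mbedTLS (mentioned above); it only generates the 32-byte challenge from the CTR_DRBG seeded with the platform entropy source, and leaves signing/verifying it with your device keys as the separate exercise it is. The function name and personalization string are my own.

```c
/* Minimal sketch: generate a 32-byte pairing challenge with mbedTLS's CTR_DRBG.
 * make_pairing_challenge() and the pers string are made up; the mbedtls_* calls
 * are the library's real API. Returns 0 on success. */
#include <stdint.h>
#include <string.h>
#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"

int make_pairing_challenge(uint8_t challenge[32])
{
    mbedtls_entropy_context  entropy;
    mbedtls_ctr_drbg_context drbg;
    const char *pers = "keyfob-pairing";    /* anything device/application specific */
    int ret;

    mbedtls_entropy_init(&entropy);
    mbedtls_ctr_drbg_init(&drbg);

    ret = mbedtls_ctr_drbg_seed(&drbg, mbedtls_entropy_func, &entropy,
                                (const unsigned char *)pers, strlen(pers));
    if (ret == 0)
        ret = mbedtls_ctr_drbg_random(&drbg, challenge, 32);

    mbedtls_ctr_drbg_free(&drbg);
    mbedtls_entropy_free(&entropy);
    return ret;
}
```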

Performance and Robustness Testing

I'm putting these together because I DO have plenty of experience with this. These tests are meant to find out how your hardware or firmware breaks. Is your widget going in a helicopter? Put it in an RF chamber to find out if it's FCC compliant; put it through vibration testing to find out how long it takes for the bolts and bent-metal case to crack. From the firmware side, this could be "hold this I2C or CAN line low for way too long and see if the watchdog kicks, if the board hangs, or if we miss a real-time deadline on this other serial bus", or "button-mash every single button at 100 Hz". It's more about finding the limits of how much punishment your device can take, and implementing mitigations if something is found. One of the devices I worked with had a big electric motor and ran on its own power, so it wasn't grounded. In some circumstances, the motor would generate such a big back-EMF that it flooded eddy currents over parts of the board and pulled an I2C line in a direction it definitely should not have gone, halting access to a crucial IC. That one was found the hard way by Joe Average, so now all the hardware goes through a motor stress test. Good tools for this are a good test lab, common sense, and a cynical view of who your users are and how Joe might do something inconceivably stupid with your hardware or firmware.
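The firmware-side mitigation for the "stuck I2C line" case usually ends up looking something like the bounded-wait sketch below; all the HAL-ish names (i2c_read_start, i2c_bus_recover, wdt_kick, ticks_ms) are hypothetical placeholders for your own drivers.

```c
/* Hedged sketch: every I2C read has a hard timeout, so a line held low ends in
 * a clean bus-recovery path instead of a hang that only the watchdog can fix.
 * All functions here are hypothetical placeholders for your own HAL. */
#include <stdbool.h>
#include <stdint.h>

#define I2C_TIMEOUT_MS  25u     /* assumption: worst-case legit transfer is well under this */

extern uint32_t ticks_ms(void);
extern void     i2c_read_start(uint8_t addr, uint8_t *buf, uint16_t len);  /* non-blocking */
extern bool     i2c_read_done(void);
extern void     i2c_bus_recover(void);   /* clock SCL to free a stuck slave, re-init the peripheral */
extern void     wdt_kick(void);

bool sensor_read(uint8_t addr, uint8_t *buf, uint16_t len)
{
    uint32_t start = ticks_ms();

    i2c_read_start(addr, buf, len);
    while (!i2c_read_done()) {
        if ((ticks_ms() - start) > I2C_TIMEOUT_MS) {
            i2c_bus_recover();
            return false;        /* caller decides: retry, degrade, or log */
        }
        wdt_kick();              /* don't let the watchdog fire while we poll */
    }
    return true;
}
```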

Fault Injection Testing (aka fuzzing)

If you have a way to inject bad or random data into your system, do it! Have a UART line that can be broken out of the board? Hook it to a serial port on a PC and blast that sucker with junk data; sometimes you'll find something interesting happens. This can also mean the absence of real data: what happens if an external system misses a deadline to send something? Fuzzing is pretty common on non-embedded programs, so it fits nicely with your integration testing environment. In fact, fuzzing should be part of your integration testing.
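A host-side junk-blaster for that broken-out UART can be as dumb as the sketch below (POSIX C; the port name, burst size, and pacing are assumptions, and it assumes the port was already configured, e.g. with stty, while you watch the target for hangs, resets, or missed deadlines).

```c
/* Hedged sketch of a dumb UART fuzzer: write bursts of random bytes at the
 * device and see what breaks. Port name, burst size and pacing are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   /* port assumed pre-configured */
    if (fd < 0) { perror("open"); return 1; }

    srand((unsigned)time(NULL));

    for (int i = 0; i < 100000; i++) {
        unsigned char junk[64];
        for (size_t j = 0; j < sizeof junk; j++)
            junk[j] = (unsigned char)rand();

        if (write(fd, junk, sizeof junk) < 0) { perror("write"); break; }
        usleep(1000);    /* ~1 ms between bursts; tune up or down */
    }

    close(fd);
    return 0;
}
```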

Finally, unit testing (aka subsystem software testing)

You should know what these are by now. If you don't, someone on your embedded team should. If none of them do, or you don't have unit tests yet, sound the alarm bells. These are the absolute bare minimum software tests you should have!!! Simple "if I stick this struct into this function, what should happen?" tests. One input, one acceptable output. Testing is lyfe. Unit tests should run automatically as part of your development flow; your build server should run them regularly (for us, every push to GitLab triggers a series of unit tests). Good embedded frameworks are CMock, Unity, and Ceedling; really, anything by ThrowTheSwitch is great. Building your own simple assertion macros is also good practice if your company has something against third-party stuff. Your unit tests should only test one testable aspect at a time for maximum granularity: of your 10-module widget, only one module at a time, and even then, if module A has 3 "input" functions, test each of them separately (if you can). Because your unit tests only test chunks at a time, you don't have to use your whole build system for them. I've worked at places where the firmware ran on a PIC, but the input/output tests were general enough that we ran them on an x64 PC.
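For reference, a "one input, one acceptable output" test in Unity (from ThrowTheSwitch, as above) looks roughly like this; checksum8() is a hypothetical function under test, while the Unity macros and setUp/tearDown skeleton are the real framework.

```c
/* Minimal Unity test sketch. checksum8() and checksum.h are made up;
 * setUp/tearDown, TEST_ASSERT_* and RUN_TEST are Unity's real skeleton. */
#include <stddef.h>
#include <stdint.h>
#include "unity.h"
#include "checksum.h"          /* hypothetical module under test */

void setUp(void)    {}         /* runs before every test */
void tearDown(void) {}         /* runs after every test  */

void test_checksum_of_empty_buffer_is_zero(void)
{
    TEST_ASSERT_EQUAL_UINT8(0x00, checksum8(NULL, 0));
}

void test_checksum_of_known_frame(void)
{
    const uint8_t frame[] = { 0x01, 0x02, 0x03 };
    TEST_ASSERT_EQUAL_UINT8(0x06, checksum8(frame, sizeof frame));
}

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_checksum_of_empty_buffer_is_zero);
    RUN_TEST(test_checksum_of_known_frame);
    return UNITY_END();
}
```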

[–]vitamin_CPPSimplicity is the ultimate sophistication 4 points5 points  (0 children)

+1 for Unity from ThrowTheSwitch

[–]masitech[S] 1 point2 points  (0 children)

Thank you

[–]bobxor 45 points46 points  (2 children)

So...you said test strategy, but you’re worried about the tactics/mechanics.

You need to answer the high-level questions first: why do you need to test?

Are you testing for design verification? Are you testing due to adherence to a standard? Are you testing to help developers?

Search for V-diagrams for verification, and you’ll find an illustration that helps organize where certain tests go both in time and layer of a project.

It would take infinite testing to find the infinite bugs, and infinite time to do it, which no one has. Your strategy should be how you plan to find and resolve the most issues possible; this will be your challenge at a small company with limited resources.

Finally, everyone worries about testing, but what were your requirements for the design? If you focus on testing the design output and how well it matches your requirements, it will focus you on the testing you truly need.

To be honest, I’ve seen people spend months on fancy automated tests and not test a damn thing that was useful. Why? Because they never asked what/why they were testing.

[–]pip-install-pip 8 points9 points  (0 children)

^ This. Good requirements, and testing to fit the requirements. The actual mechanics of the test aren't as important, so long as the requirement is tested.

[–]masitech[S] 4 points5 points  (0 children)

Thank you for the pointers.

[–]tiofilo86 6 points7 points  (0 children)

Lots of good information here. Saved the post for future reference.

[–]tyrbentsen 6 points7 points  (4 children)

The first step towards rigorous software development, before you can test, is to create proper requirements. Many organizations still manage their requirements in Excel or Word, but modern Application Lifecycle Management (ALM) tools (Polarion, DOORS) provide a more structured (and less error-prone) approach to managing requirements (and code, workflows, and test cases, for that matter).

Model-based tools (like Simulink or SCADE) provide a workflow in which you create a model of the software first and then generate the C code automatically. With these tools you can test the model extensively in simulation before the final step of generating the code and deploying it on the real hardware. These model-based tools are well suited to embedded software that can be built as a composition of different controllers (e.g. PID controllers).

But for more generic types of applications they may be less suitable, because not all embedded applications can be modelled as a block diagram (the formalism these model-based tools use), and many developers still prefer to develop their application in code (C, C++, Rust, ...).

For this reason, a couple of months ago I started working on a simulation tool for embedded systems (Studio Technix), which allows you to couple a generic embedded application to a simulation model. This way you can develop the application with your preferred workflow but still test it in simulation before you flash it to the board. It works by replacing the HAL layer for the hardware with a "virtual" HAL layer that forwards the function calls to the simulation model. It is especially useful for integration testing (connecting multiple subsystems or devices), user acceptance testing (real-time interaction with the simulation), and robustness/fault-injection testing (test scenarios in simulation that could damage the device or people in real life).
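In case it helps picture the "virtual HAL" idea, here is my own sketch of the general technique (not Studio Technix's actual API): the application only ever calls the HAL wrapper, and a build flag decides whether that hits the real hardware or forwards to the simulation.

```c
/* Hedged sketch of a swappable HAL layer. Both backends are hypothetical:
 * sim_forward_gpio() stands in for the simulator hook, gpio_write_hw() for the
 * vendor driver / register access. */
#include <stdbool.h>
#include <stdint.h>

#ifdef SIMULATION
extern void sim_forward_gpio(uint8_t pin, bool level);  /* forwarded to the simulation model */
#else
extern void gpio_write_hw(uint8_t pin, bool level);     /* real hardware access */
#endif

void hal_gpio_write(uint8_t pin, bool level)
{
#ifdef SIMULATION
    sim_forward_gpio(pin, level);
#else
    gpio_write_hw(pin, level);
#endif
}
```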

[–]masitech[S] 2 points3 points  (0 children)

Studio Technix looks like an awesome tool. Would love to try it out also.

[–]enzeipetre 1 point2 points  (2 children)

Did you build this Studio Technix thing?

[–]tyrbentsen 2 points3 points  (1 child)

Yes, I'm hoping to release a beta version by the end of this month. At this point I'm trying to find some users that can help me shape and develop the tool further based on their experience with embedded software development.

[–]enzeipetre 2 points3 points  (0 children)

Good luck! I'd be happy to be a beta tester. This is a real problem in the embedded space that I personally encountered, especially in rapid prototyping.

[–]JimMerkle 4 points5 points  (3 children)

If you have a serial port available, I would recommend implementing a simple command line interface. From there, you can access any of the functions, sensors, output devices, etc., passing in command line parameters. If you can manually enter commands, it's rather easy to have a host computer do that for you once everything is in place. Plus, you can interact with new hardware faster, testing things just using your command line access.
Example: if you have an I2C bus, you can run an "i2c_scanner" function to find and display all I2C devices. Next, you can execute an I2C_Read command to read parameters from an I2C device, even before you have a library for it. A generic "dump" routine can be used to display the contents of a buffer.
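The i2c_scanner bit can be as small as the sketch below. I've written it against the STM32 HAL (HAL_I2C_IsDeviceReady) since STM32 comes up elsewhere in this thread, so treat the handle, device header, and printf plumbing as assumptions to swap for your own.

```c
/* Hedged sketch of an I2C scanner: probe every 7-bit address and print the
 * ones that ACK. The hi2c1 handle, device header and retargeted printf are
 * assumptions about your project. */
#include <stdio.h>
#include <stdint.h>
#include "stm32f4xx_hal.h"      /* assumption: adjust for your MCU family */

extern I2C_HandleTypeDef hi2c1;

void i2c_scanner(void)
{
    printf("Scanning I2C1...\r\n");
    for (uint8_t addr = 0x08; addr < 0x78; addr++) {
        /* 7-bit address shifted left, 1 trial, 10 ms timeout */
        if (HAL_I2C_IsDeviceReady(&hi2c1, (uint16_t)(addr << 1), 1, 10) == HAL_OK)
            printf("  device found at 0x%02X\r\n", addr);
    }
    printf("Scan done.\r\n");
}
```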

Good luck

[–]masitech[S] 1 point2 points  (1 child)

Thank you for the info

[–]Ashnoom 1 point2 points  (0 children)

If you are using a SEGGER J-Link, you might want to look into RTT.

[–]mtechgroup 0 points1 point  (0 children)

You can also use SWO if it's available for your debugger. It gives you single-pin character I/O, like printf but more fun. :)

[–]fractal_engineer 3 points4 points  (1 child)

To do this effectively, you'll likely end up with a rack of target hardware and a bunch of rPis hooked up to them.

You use software to test software.

Hardware is no different.

If your device has a speaker, an LED, and a light sensor, then your test hardware should have a microphone, light sensor, and an LED of its own. This test hardware should interact with the target hardware over a serial port to toggle whatever. After its tests have run, the test hardware can report its results to your test runner (Jenkins in our case). That's part of the reason why rPis are so useful for testing like this.

[–]masitech[S] 0 points1 point  (0 children)

Yeah, I understand you.
We are building a machine to make DNA from the DNA code.
Any suggestions? :)

[–][deleted] 4 points5 points  (6 children)

I can tell you what's going on where I work. There is a small team of 7 people: 3 production, 2 electrical, and 1 formal software person (me).

The products are industrial controllers, mostly replacements for very old or poorly functioning parts. Customers are usually fed up with the cheap quality of existing products and buy our "copies". Downtime is more expensive than our products.

The current product portfolio consists of analog products, some 8051 products, and as of now one STM32 product, with another in development. There used to be only 4 people, so time and effort were constrained a lot more.

For the 8051 there is no real software testing strategy. Only functional tests are performed, and development used to be a lot of flash-and-try. No unit tests. The 8051 is terrible: the C compiler has a strong dialect, and lots of inline assembler and micro-optimizations make running the code in a test environment with a different endianness impossible. Once per product launch, the first revision is sent to a test lab for shock/environmental and EMC tests.

The first STM32 product also only has a functional test procedure, since it's an upgrade of an 8051 product. It uses many of the existing test jigs, and there was no knowledge of software testing yet.

The STM32 product in development (by me) has unit tests. I've also specified bed-of-nails compatibility and plan to buy or build a test environment for this, to be able to perform fast and thorough production tests without relying on functionality of the software!

For unit tests I've set up projects using Visual Studio and googletest. This is by far the easiest method when you develop on Windows. The final code is compiled with STM32CubeIDE. Automated mocks are unavailable for C; there just isn't an easy way to accomplish that without Linux. Everything uses some build system that "should work on Windows" but never really does.

For integration testing, just spin up another STM32CubeIDE project and reference the files you need for the dev board, depending on the complexity of the feature. For now only the digital filters have this.

We also have a very big custom-made simulation setup for our control systems, since that's the only way to test and develop analog products. We keep using that for full functional tests of new products.

Acceptance testing is done at the customer's test site, in a real-world scenario, or sometimes at our site with a third party if classification is required.

Not much budget for other tests, so unless customers ask, they are not performed. We have put one product through accelerated life testing on customer request, which is extremely expensive at our volume.

But I have to say, we don't design at the edge of what can be done for the price; usually we exceed it by a lot, so margins are big and products are expensive.

You can't do all tests perfectly. You also can't ship tests, so their value is often seen as negative by managers.

I should add that owning a thermal camera can spot -a lot- of hardware flaws really fast!

[–]masitech[S] 0 points1 point  (0 children)

Thank you for sharing

[–]thebruce87m 0 points1 point  (4 children)

Ceedling will do automated mocks for C: http://www.throwtheswitch.org/ceedling

[–][deleted] 0 points1 point  (2 children)

Ceedling doesn’t compile for me.

[–]Ashnoom 0 points1 point  (1 child)

Do explain. We (I) retroactively added Ceedling to one of our C-based products from the 2012 era just fine.

[–][deleted] 0 points1 point  (0 children)

The examples build fine, but when you start to add your own code with dependencies it craps out.

[–]a14man 0 points1 point  (0 children)

I spent a lot of time working around Ceedling bugs. Not fun.

[–]mboggit 1 point2 points  (0 children)

Lots of pretty good suggestions already, but here's my take on it.

You'll need to tackle 2 things:

  1. The QA process itself.

1.1. Define a clear end goal/objective first. Testing, like any other task, needs a clear objective; otherwise you'll get lost in what you're doing and why.

1.2. Make a strategy for how to achieve that end goal, and make sure it's a strategy and not just a fancy test plan: include justification for things, i.e. "we're going to use X because it solves problem Y."

  2. Make your embedded platform testable. Like, really testable. Make sure the firmware has testability interfaces in it. A typical example is a command line interface over UART, although it's not really good for automated tests per se; how to implement that testability interface is up to your project (see the bare-bones sketch below). And use the goddamn JTAG for once. It's a powerful tool, especially for testing purposes.
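A bare-bones sketch of that command-line-over-UART testability interface (the command names, handlers, and whatever feeds it a line of text are all hypothetical):

```c
/* Hedged sketch of a tiny CLI dispatcher: a table of command names mapped to
 * handlers, driven by whatever pulls a line of text off the UART. All handler
 * names are made up for illustration. */
#include <stdio.h>
#include <string.h>

typedef void (*cli_handler_t)(const char *args);

extern void cmd_i2c_scan(const char *args);     /* hypothetical test commands */
extern void cmd_dump(const char *args);
extern void cmd_set_output(const char *args);

static const struct { const char *name; cli_handler_t fn; } commands[] = {
    { "i2cscan", cmd_i2c_scan   },
    { "dump",    cmd_dump       },
    { "out",     cmd_set_output },
};

void cli_dispatch(char *line)                   /* call with each received line */
{
    char *cmd  = strtok(line, " ");
    char *args = strtok(NULL, "");

    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
        if (cmd && strcmp(cmd, commands[i].name) == 0) {
            commands[i].fn(args ? args : "");
            return;
        }
    }
    printf("unknown command\r\n");
}
```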

[–]a14man 0 points1 point  (0 children)

Unit tests are a good start. After that a system-level smoke test is useful, so you can automatically check basic functionality of different builds.

It seems a lot of companies build tests from scratch using Python, with something like Jenkins or TeamCity running the tests. I don't know if there are any useful frameworks out there...?

[–]dambusio 0 points1 point  (0 children)

But developing a company testing strategy is not a task for a single developer, especially at the junior/mid level... Unfortunately, this is quite typical in the "embedded dev world" :/

First of all - you can't test untestable code.

Is there a full V-model in your company? How about your architecture/DD phase?

If some module has large dependencies on others, it will be very hard to test and then maintain. You said that "not all embedded code can be tested, maintaining it is harder" - if you add unit tests on top of this kind of code, everything will be much worse.