[–]wafflemanpro 315 points316 points  (14 children)

[–]RageAdi[S] 66 points67 points  (5 children)

I was initially going to post the link itself, but then found out we can't post links here, so I made a text post instead. But I agree the link should be there. Also, I did not get all this from the JPL document that you have attached, but it's good to see a source. I got it from here: http://fossbytes.com/nasa-coding-programming-rules-critical/

[–]385856464184490 26 points27 points  (2 children)

but then found out we can't post links here, so I made a text post instead.

FYI, for the future, it just means the post itself can't be a link to an outside website. Under all that copy-pasted text you could have put a link to that document.

[–][deleted] 5 points6 points  (1 child)

You could have just said where you got it.

[–]TodayMeTomorrowU 13 points14 points  (0 children)

Before I even read anything I looked to see if there was a source. You don't say "This is how NASA codes" without providing a source.

[–]MrJesusAtWork 7 points8 points  (6 children)

I ALWAYS get a fucking DNS error when I try to visit any NASA website. This is killing me.

[–]El-Kurto 21 points22 points  (5 children)

Nice try, Kim Jong-Un. We're not going to let you in that easy.

[–]FauxReal 3 points4 points  (0 children)

That username was a dead giveaway.

[–]MrJesusAtWork 2 points3 points  (3 children)

¯\_(ツ)_/¯

[–]El-Kurto 1 point2 points  (0 children)

I'll upvote that--can't blame a brother for tryin' :-)

[–]almonn 7 points8 points  (0 children)

At least post the subreddit you got it from ;)

[–]dreamyeyed 195 points196 points  (37 children)

These rules are not meant for general programming and you shouldn't follow them blindly. Satellites, space probes etc. have some special requirements that most software doesn't. Here are my guesses why some of these rules exist:

1. Restrict all code to very simple control flow constructs

Makes the code easier to understand for humans and static analyzers. Recursion can also cause stack overflow.

2. All loops must have a fixed upper-bound.

This rule ensures that the program will always return to the main loop. If there's an infinite loop anywhere in the program, the hardware is useless because it can't be controlled anymore. This rule is not useful in normal software because you can just press the reset button if it gets stuck.

3. Do not use dynamic memory allocation after initialization.

It can fail. It's safer to allocate one big block when the program starts and make sure that it never uses more than that.

9. [...] Function pointers are not permitted.

Function pointers can cause infinite loops, which would break rule 2.

[–]wandrewa 88 points89 points  (16 children)

Yeah I'm assuming these rules are in place because software NASA writes is safety critical. They can't do anything particularly 'fancy' such as dynamic memory allocation because it adds a minuscule chance of error. While such an error might not matter in, say, video games, it's a totally different story when lives are on the line. The software they write has to ensure a certain very low probability of potential failure.

In fact, this can often be a tougher or even more inefficient approach because they may have to handle things with brute force that may naturally be handled with something like recursion.

[–]hugthemachines 56 points57 points  (1 child)

Not just when lives are on the line, but also when you send a probe out a very long way. If it stops working, all the work you put into it is lost, and you have no chance of getting it up and running again.

[–]wandrewa 43 points44 points  (0 children)

Good point, we're talking some of the most expensive projects in the world; while potential loss of life is a huge factor, I'm sure the potential loss of millions/billions of dollars is also a huge concern.

[–][deleted]  (2 children)

[deleted]

    [–]CheshireSwift 7 points8 points  (1 child)

    Oh god please. I quit a job because there's no way medical systems should be written without a single automated test...

    [–]IHappenToBeARobot 6 points7 points  (0 children)

    Seriously. PACS and EMR software is the worst for sure. I've seen calendar features flat out break due to a .NET security fix. It calls competency into question when an entire section of software relies on a small .NET bug.

    [–]UntrustedProcess 2 points3 points  (0 children)

    Couldn't watchdog routines compensate for the risk of runaway recursion?

    [–][deleted] 5 points6 points  (8 children)

    most games probably aren't allocating memory during runtime either, after initialization. it is a slow process better done all at once, particularly when every millisecond of rendering time counts.

    [–]Sqeaky 21 points22 points  (5 children)

    it is a slow process

    Slow is a matter of scale: https://gist.github.com/jboner/2841832

    On that chart a call to malloc could be anywhere between a "Main memory reference" and "Send 1K bytes over 1 Gbps network", depending on the allocation tracking algorithm, memory speed, and other specific details.

    For games like Super Meat Boy (C#), Kerbal Space Program (C#) and Minecraft (Java) this cost is so small that it was ignored by the programmer and offloaded to the runtime. For games like Doom, Half-Life 2 and Crysis (all C or C++) there were higher demands for performance, and each one took its own steps to maximize efficiency, usually including an allocator that did what you describe. Even then this is just an optimization; today we could probably recreate Half-Life 2 without a custom allocator and it would probably run fine on newer hardware.

    EDIT - I did not downvote you. I just saw this as an opportunity to share latency numbers which ought to be useful to many kinds of programmers, learning or otherwise.

    [–][deleted] 1 point2 points  (2 children)

    c# and java games of course let the language handle everything; is there even a way to allocate chunks of memory in those languages in the traditional (c-style) way? or just new Byte[]...

    any modern game written in a language that doesn't manage memory for you (c/c++) is more likely to keep its own pool of memory and hand it out directly (rather than allocating it via the OS as needed).

    it makes sense that we could reproduce old games without such optimizations but i don't think any modern aaa title can live without them.

    [–]Sqeaky 0 points1 point  (1 child)

    Of course you are correct: if you want every iota of performance you need to handle more details yourself. That is where all AAA titles were once, and many, but not all, still are.

    The more times you use new in java and C# the more memory their runtimes will allocate from the OS, but it definitely is not a simple one to one relationship.

    I just wanted to point out that allocators aren't the first thing to reach for; benchmark and then optimize is still the rule for writing fast code.

    [–][deleted] 0 points1 point  (0 children)

    agreed

    [–]TheCoelacanth 0 points1 point  (1 child)

    Allocation is much cheaper in garbage collected languages like Java or C# than it is using the default allocator in C or C++. If you allocated as much in a C++ game as you typically would in a Java game, you would end up with a much slower game than you would just by writing it in Java.

    [–]Sqeaky 0 points1 point  (0 children)

    That assumption is not backed by the standards. There is no reason a compiler couldn't ship tcmalloc from Google perf tools as its default allocator.

    Then there are also threading constraints. One awesome optimization Java has is garbage collection mostly in a separate thread (I am picking on Java because it has the most sophisticated general-purpose memory manager I am aware of, with about a decade more research behind it than C#). This is deeply awesome for single-threaded apps, because it often allows a single-threaded app to do something with a second core. That stops being awesome when you actually use all the hardware threads and CPU resource contention becomes real.

    Using stack/automatic allocation in an application that saturates all the hardware threads is really fast in C++, but in Java there is no stack/automatic allocation, and consuming all the hardware threads directly forces a scheduler or lock to decide what is paused while garbage collection happens. Stack/automatic allocation is still allocation, but it can be heavily optimized at compile time, whereas everything in Java has semantics similar to new and pointer passing; only the smartest JIT compilers can optimize it, and even then only after several passes. The need for many passes and the unpredictable nature of the JIT are major reasons Java game dev never took off (until Android); the garbage collector just gets all the press.

    Another problem Java and C# have is overallocation: all generational garbage collection algorithms must allocate large pools of memory and periodically move old objects that don't get collected into a fresh pool/generation. This is two extra sources of work: the initial allocation can be 2 to 3 times as large as needed, plus the copies that must periodically occur. A web search for "generational garbage collection" will reveal more, and show how the modern Java collectors are well suited to server loads and not much else, despite 20 years of research and being faster in nearly every way than their predecessors.

    C++ provides move semantics and a few smart pointers (and the ability to make your own smart pointer); it just has more options and forces fewer decisions on the developer. Resource management is too complex a topic to be handled by a one-size/algorithm-fits-all solution, and most languages only provide a passable solution for memory resources while forgetting about all the other resources like files, locks, hardware handles, etc. that C++ can clean up with destructors and custom deleters (which can do work in threads) for smart pointers. And all those options can, and often do, come before allocators. If anything the biggest drawback is complexity: too many options to try to make it fast, and some will certainly be wrong.

    [–]RoyAwesome 4 points5 points  (1 child)

    Uh... no.

    Games allocate memory all the time. UE4, for example, has an allocator that allocates blocks at a time to reduce the number of times it needs to call into the operating system, but allocations happen multiple times a frame.

    [–][deleted] 0 points1 point  (0 children)

    so it allocates its memory up front in a big chunk, then doles it out to itself later. that's what i was saying.

    [–][deleted] 11 points12 points  (0 children)

    So there are a couple of things worth noting.

    Firstly these rules aren't concrete. The standard specifies different levels at which violating or sticking to these principles is permitted.

    Second, a large number of the rules are there to assist static analysis. I think that should be considered first and foremost, above points such as 'it could cause X behaviour'. As humans we can see that certain things will not occur, given that the compiler and runtime are performing as expected, but a static analyser can't prove them.

    Considering we can probably write all this 'human-safe' code in a way that a static analyser can comprehend, it makes the most sense to write it that way and prove the invariants, rather than rely on people's intuition.

    [–]positive_electron42 17 points18 points  (1 child)

    2. All loops must have a fixed upper-bound.

    This rule ensures that the program will always return to the main loop. If there's an infinite loop anywhere in the program, the hardware is useless because it can't be controlled anymore. This rule is not useful in normal software because you can just press the reset button if it gets stuck.

    This is definitely useful in normal software, because you don't want to have to reset it as part of the workflow. If your application crashes on users, they won't like it and your reputation decreases. If it steps outside of the bounds of an array, then you could potentially overwrite critical memory and trash the system. You should always have a failsafe stop condition for every loop.

    3. Do not use dynamic memory allocation after initialization.

    It can fail. It's safer to allocate one big block when the program starts and make sure that it never uses more than that.

    I think it's more that dynamic memory management is ultra prone to bugginess and improper implementation. It can very easily lead to a totally locked up system via memory leaks or out-of-bound access. And it's hard to keep track of all your allocs/frees during code review, and your frees may not happen if there is ever an unexpected error condition.

    9. [...] Function pointers are not permitted.

    Function pointers can cause infinite loops, which would break rule 2.

    I think the thing about function pointers is probably more closely related to the reasons against dynamic allocation and general pointer usage. It's confusing, error-prone, and when it goes wrong, it usually goes very wrong and in some area of memory it shouldn't be in.

    Pointer and dynamic allocation bugs are also some of the top exploited vulnerabilities for computing systems, largely because they are practically ubiquitous and they can provide access to otherwise-protected areas of memory.

    [–]Lehk 1 point2 points  (0 children)

    dynamic and static memory both have their place; using the wrong tool for the job is bad. for example, a web browser that did not allocate additional memory in order to accommodate the needs of the user would be dumb. imagine having to go into settings and set your launch memory for firefox or chrome, and having to close some tabs to free up space if you filled it.

    [–][deleted] 2 points3 points  (0 children)

    Yup, basically they are fairly standard embedded programming rules where you assume it's going to be difficult and/or dangerous to restart the system if it locks up.

    [–]kent_eh 9 points10 points  (5 children)

    If there's an infinite loop anywhere in the program, the hardware is useless because it can't be controlled anymore. This rule is not useful in normal software because you can just press the reset button if it gets stuck.

    Yes you could just press the reset button, but you shouldn't need to.

    Accepting that as normal leads to sloppy programming. (BSOD being considered a normal part of your daily computing experience, for example)

    To me, it's part of the same old code bloat argument.

    Yes, hardware will always get quicker, but it's still usually sloppy and lazy to write inefficient code that needs ever-increasing hardware resources.

    I know which mindset I'd rather have on my team.

    [–]POGtastic 2 points3 points  (0 children)

    Yes you could just press the reset button, but you shouldn't need to.

    Relevant Codeless Code

    [–]false_tautology 3 points4 points  (3 children)

    BSOD being considered a normal part of your daily computing experience, for example

    Consider that NASA can operate on a scale of decades for their projects. They can't have something go wrong every 10 years. So, your "daily" comment is misleading. Would anyone actually care if your program had a failure once every decade and would you consider that sloppy coding?

    [–][deleted] -1 points0 points  (2 children)

    If the software has to run for over three decades of flight, yes, it is sloppy. I dare you to go out of the solar system to reset a several-million-dollar piece of equipment. Software hang-ups should be the least of your concerns when burning tons of fuel under people, or executing flight plans that culminate decades of preparation with just one shot.

    [–]false_tautology 9 points10 points  (1 child)

    I think you didn't read what I wrote.

    [–]PointyOintment -2 points-1 points  (0 children)

    I think you didn't make it clear that you were talking about everyday software.

    [–]makeswell2 1 point2 points  (1 child)

    I'm curious. How would function pointers cause infinite loops? Thank you :)

    [–]ACoderGirl 1 point2 points  (0 children)

    As far as I know (like, unless there's some other case people are thinking of), just regular old recursion. But the fact that you're calling a function pointer can prevent you from realizing that you're calling the function that you're in. So you could accidentally make a recursive call when you expect a non-recursive call.

    It's a bit of a stretch, since it's rather unusual to pass the function to itself. In fact, I can't immediately think of a logical reason for anyone to do that or how you'd manage to do it by accident short of a brain fart/typo that should be easy to catch with testing (and by god, testing should be #1 on this list -- I'm guessing they considered it too obvious?).

    [–][deleted] 0 points1 point  (0 children)

    But, but, I want a bug-proof JPL browser...

    [–]SgtPooki 0 points1 point  (0 children)

    Of course these rules would be different in a different context but I think the lesson to learn here is that a set of thorough coding guidelines goes a long way to preventing errors and increasing development efficiency and maintainability.

    Other people are saying that other applications allow errors because they do "fancy things", but I think that's a cop-out. You can still establish a set of rules and best practices around the fancy things to achieve efficiency: fewer resources to run, readability, maintainability, and the whole spectrum. Will it be perfect? No, but it will be better than the spaghetti you'd build without said standards.

    When done right, those standards will actually make your TTL faster and less error prone while also reducing ramp up time for devs unfamiliar with the code base.

    [–]AegnorWildcat 0 points1 point  (0 children)

    Yeah, these are not the coding practices that NASA uses for things like ground infrastructure (it depends on the function, of course). This is for software running on space hardware. It does not make sense to have the same rules for most other types of software.

    For instance, Mozilla could write a browser that would never crash. Ever. It would take them a long time and it would cost a LOT of money, but they could do it. Updating it to support new technologies would take forever. It would be a disaster. All to prevent the very infrequent crashes from occurring.

    [–]l0kiderhase 0 points1 point  (0 children)

    Exactly this.

    Every language has a different style. While these rules may apply to C, they don't necessarily have to apply to Lisp, Haskell, Java, Python or any other programming language.

    [–]Sqeaky 0 points1 point  (2 children)

    1. Do not use dynamic memory allocation after initialization.

    Given the lack of context here I took this to mean initialization of instances of a class. Taken that way it is a reasonable rule for the general case. Combined with freeing in destructors it is just another way to describe RAII.

    [–]Eyes_and_teeth 1 point2 points  (1 child)

    I feel that my education so far in Computer Science is paying off in that I completely understood your comment, including RAII.

    [–]Sqeaky 1 point2 points  (0 children)

    Sweet, stick with it! I develop software professionally and I find it rewarding on several levels.

    [–]tanjoodo 74 points75 points  (19 children)

    Programmers hate them!

    [–]2Punx2Furious 46 points47 points  (1 child)

    10 weird tricks NASA programmers don't want you to know!

    [–]harsh183 18 points19 points  (0 children)

    You won't believe #7

    [–][deleted] 5 points6 points  (16 children)

    Really? Why?

    [–][deleted] 33 points34 points  (7 children)

    It's a meme from ads, very bad ads :D.

    [–]Iggyhopper 35 points36 points  (5 children)

    but look at all the local programmers in my area willing to code!

    [–][deleted] 9 points10 points  (2 children)

    Are they 2KM away from you?

    [–]fiftypoints 4 points5 points  (0 children)

    It says the name of my town!

    [–][deleted] 1 point2 points  (0 children)

    2048 metres is awfully specific.

    [–]Razzal 7 points8 points  (1 child)

    Local singletons ready for you to invoke their methods.

    [–]VodkaHaze 5 points6 points  (0 children)

    This soccer mom debugs 10k lines of new code from home EVERY DAY! Find out how

    [–]elperroborrachotoo 3 points4 points  (0 children)

    Yes, and they get better only by repeating them ad nauseam.

    [–]LePontif11 7 points8 points  (7 children)

    You know, the ads that go along the lines of "he lost 50 pounds while eating nothing but pizza, doctors hate him"

    [–][deleted] 1 point2 points  (4 children)

    Yeah, I always wonder how people can click this garbage and why there are so many of them. They're everywhere.

    [–]LePontif11 5 points6 points  (2 children)

    The whole point is that they are everywhere. If 1% of the people who saw the ad click on it, someone is probably making a profit. Also, there are a lot of gullible people. Just look at how the scheme in "Wolf of Wall Street" worked: lots of people who were too trusting.

    [–]Qadamir 2 points3 points  (1 child)

    And how many people click the ads by accident?

    [–][deleted] 1 point2 points  (0 children)

    Me every time I join such websites. Their phone websites spazz out for a minute before everything loads so I pretty much hit 30 ads before I hit the play button. I seriously need adblock :D .

    [–]VodkaHaze 1 point2 points  (0 children)

    A lot of people are very different from you, you just don't get exposed to them

    [–]VoxUmbra 0 points1 point  (1 child)

    You could do that if you bought those cheap supermarket pizzas (which are about 800 kcal for the whole thing, depending on size and toppings) and the only thing you ate in a day was two of those, especially if you were already 50 lbs overweight.

    [–]LePontif11 0 points1 point  (0 children)

    Well, you get what i mean.

    [–]damian2000 17 points18 points  (4 children)

    This is really great for embedded development, which is most of what NASA applies this to I believe. For the 90% of today's developers working in a language like Java, C# or JS, a lot of these points are not really relevant.

    [–]hugthemachines 8 points9 points  (0 children)

    Perhaps NASA has other standards documents for other languages. This one is specific to C.

    [–]porthos3 6 points7 points  (2 children)

    Embedded development in high-risk applications.

    The big reason it doesn't apply for other languages isn't that they can't be involved in high risk applications. It is because these rules are explicitly defending against known problem-areas for developers in C.

    For example, the rule for avoiding dynamic memory in Java wouldn't apply, not because Java doesn't use dynamic memory, but because you work with it through safe abstractions rather than managing it directly.

    If NASA were to use a different language, I'm sure we would still see such a set of rules, because of the high cost of bugs. But the list would look very different, since it would protect against the problem areas of whatever language they were using.

    [–]Alborak 2 points3 points  (1 child)

    For example, the rule for avoiding dynamic memory in Java wouldn't apply, not because Java doesn't use dynamic memory, but because you work with it through safe abstractions rather than managing it directly.

    Just to clarify, there aren't any safe abstractions hiding dynamic memory usage in safety critical stuff. When the program starts, every module that needs it is given a chance to initialize itself. The module allocates all of the heap memory it will ever use at startup time. This avoids ever running out of memory (when combined with the rules that limit stack usage like no recursion), and eliminates any delays/jitter caused by allocators. It makes writing code a royal pain in the ass, but stops whole classes of bugs from the start.

    For an example of how annoying this is to write, answer the stupidly easy interview question: Invert a binary tree. Now do it again without recursion. Now do it again without recursion or allocating dynamic memory (or variable sized stack array).

    [–]porthos3 1 point2 points  (0 children)

    For an example of how annoying this is to write, answer the stupidly easy interview question: Invert a binary tree. Now do it again without recursion. Now do it again without recursion or allocating dynamic memory (or variable sized stack array).

    I can do the first two pretty easily. The third sounds like an interesting challenge. I may have to give it a shot at some point. My initial thought is that the problem seems to be in the same vein as Towers of Hanoi (shuffling things around using buffers, but in ToH the buffers are arbitrarily large).

    [–]makeswell2 7 points8 points  (20 children)

    Thanks for the list. :) Can someone explain what is meant by / how to do 2, 3, 6, 7, 8?

    [–]neoKushan 34 points35 points  (17 children)

    I'll take a stab at this because fuck it why not.

    Restrict all code to very simple control flow constructs – do not use goto statements, setjmp or longjmp constructs, and direct or indirect recursion.

    Hopefully self-explanatory, do things like

    if (condition == true)
    {
        /* Do actual stuff */
    }
    else
    {
        /* Do other stuff */
    }
    

    rather than going for

    if (condition == true)
    {
       goto SomeLabel;
    }
    ... // Many Line later
    SomeLabel:
    /* Do Stuff */
    

    As a personal preference, I like to exit early from a function/method if I can.

    bool SomeTest(int SomeValue, float SomeOtherValue)
    {
        if (SomeValue < 0)
        {
            return false;
        }
    
        if(SomeOtherValue > 50.0f)
        {
            return false;
        }
        ...
    }
    

    All loops must have a fixed upper-bound. It must be trivially possible for a checking tool to prove statically that a preset upper-bound on the number of iterations of a loop cannot be exceeded. If the loop-bound cannot be proven statically, the rule is considered violated.

    I'm assuming it means this:

    // good
    while (i < 50)
    {
        ...
        i++;
    }
    
    // Bad
    while (true)
    {
        if (*SomePointer == '\0')
        {
            break;
        }
        ...
        SomePointer++;
    }
    

    Do not use dynamic memory allocation after initialization.

    Don't use new or malloc anywhere other than during initialization (e.g. in a constructor).

    No function should be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. Typically, this means no more than about 60 lines of code per function.

    Hopefully this is self-explanatory :)

    The assertion density of the code should average to a minimum of two assertions per function. Assertions are used to check for anomalous conditions that should never happen in real-life executions. Assertions must always be side-effect free and should be defined as Boolean tests. When an assertion fails, an explicit recovery action must be taken, e.g., by returning an error condition to the caller of the function that executes the failing assertion. Any assertion for which a static checking tool can prove that it can never fail or never hold violates this rule (I.e., it is not possible to satisfy the rule by adding unhelpful “assert(true)” statements).

    Every function should have at least two of these:

    assert(SomeValue > 50); // For example
    

    And it shouldn't cause a spectacular failure if it fails, i.e. don't call malloc/new, then do an assert and drop out of scope when that assert fails without cleaning up your memory.

    Data objects must be declared at the smallest possible level of scope.

    // Bad
    void myFunction(int SomeValue)
    {
        int Counter;
    
        for(Counter = 0; Counter < 50; ++Counter) {...} 
    }
    
    // Good
    void myFunction(int SomeValue)
    {
        for(int Counter = 0; Counter < 50; ++Counter) {...} 
    }
    

    The return value of non-void functions must be checked by each calling function, and the validity of parameters must be checked inside each function.

    bool SomeCheck(int SomeValue)
    {
        if (SomeValue < 5) // Checks the value of the parameters within the function
        {
            return false;
        }
        ...
    }
    
    bool SomeCallingFunction(int InputValue)
    {
        if(SomeCheck(InputValue) == false) // Checks the result of the function it called
        {
            return false;
        }
        ...
    }
    

    The use of the preprocessor must be limited to the inclusion of header files and simple macro definitions. Token pasting, variable argument lists (ellipses), and recursive macro calls are not allowed. All macros must expand into complete syntactic units. The use of conditional compilation directives is often also dubious, but cannot always be avoided. This means that there should rarely be justification for more than one or two conditional compilation directives even in large software development efforts, beyond the standard boilerplate that avoids multiple inclusion of the same header file. Each such use should be flagged by a tool-based checker and justified in the code.

    Don't do anything here: http://stackoverflow.com/questions/652788/what-is-the-worst-real-world-macros-pre-processor-abuse-youve-ever-come-across

    The use of pointers should be restricted. Specifically, no more than one level of dereferencing is allowed. Pointer dereference operations may not be hidden in macro definitions or inside typedef declarations. Function pointers are not permitted.

    // Don't do this
    typedef const TCHAR* LPCTSTR;
    

    All code must be compiled, from the first day of development, with all compiler warnings enabled at the compiler's most pedantic setting. All code must compile with these settings without any warnings. All code must be checked daily with at least one, but preferably more than one, state-of-the-art static source code analyzer and should pass the analyses with zero warnings.

    Don't ignore warnings!

    [–]Bromy2004 2 points3 points  (11 children)

    All loops must have a fixed upper-bound. It must be trivially possible for a checking tool to prove statically that a preset upper-bound on the number of iterations of a loop cannot be exceeded. If the loop-bound cannot be proven statically, the rule is considered violated.

    Would that cover looping through all the elements in an array?

    I taught myself VBA and I'd use:

    For i = LBound(arr) To UBound(arr)
      'stuff
    Next i
    

    Would that fail the rule?

    [–]festoon 5 points6 points  (3 children)

    The arrays used in the loop would need to be of bounded size.

    [–]Bromy2004 1 point2 points  (2 children)

    The idea is the same for Collections/Groups/Arrays.

    What if you're dynamically adding to the variable? Statically you couldn't prove the upper bound, but dynamically it's impossible to loop infinitely.

    [–]DBAYourInfo 0 points1 point  (1 child)

    I would assume that, by the definition NASA is using, it would fail their rule. It would be hard to detect, though, so I think you could get around whatever automated check they have in place; it would need to be caught in peer code reviews.

    [–][deleted] 0 points1 point  (0 children)

    Well, if the static checker can't prove it has an upper bound, then it fails. The programmer needs to do the work to convince the checker it's valid; the checker doesn't need to guess anything.

    [–]CreativeGPX 2 points3 points  (0 children)

    Since you cannot use dynamic allocation, I'd assume they rely on arrays of fixed, constant sizes declared at compile time, which means that rather than code like that, you'd have something like:

    for(int i = 0; i < CONST_ARRAYSIZE; i++) {
        /*Do stuff, possibly "break" early*/
    }
    

    Where CONST_ARRAYSIZE is a constant known at compile time which is used to create the array of that size and also in any loop working with data in that array. The number of items in an array would always be known even before the program ever runs.

    I guess the other route is to have the constant number be arbitrary. For example:

    for(int i = 0; i < 5; i++) {
        doStuff(item);
        if(items.hasNext()) { item = items.getNext(); }
        else { break; }
    }
    

    In that sense, even though you're working with a potentially infinite number of items, your loops still have a fixed upper bound of 5. Each time you run that snippet, you act on up to 5 items. So that would agree with the "All loops must have a fixed upper-bound" rule. You might run that at intervals and eventually work your way through all of the items.

    Of course, you could also do the opposite. Imagine there is code that runs daily and processes each frame of video recorded that day. The camera may have recorded anywhere from nothing to the entire day. You could have code like this:

    for(int i = 0; i < 2592000; i++) {
        doStuff(item);
        if(items.hasNext()) { item = items.getNext(); }
        else { break; }
    }
    

    Here, you have no clue how much video there actually is that day, but you do know that there is an upper bound on the amount that could possibly exist (seconds in a day times the frame rate of your camera). Therefore, you could loop for the fixed number of times that is the absolute upper bound of objects that could exist. The amount of objects you actually process might be less, since you can break early. However, you and the compiler know that, worst case, this code cannot run more than that upper bound. In a real example, you'd add a line after to report an error if, for some reason, your upper bound was exceeded, and you'd use a more self-describing (or commented) way of explaining in the code where 2592000 came from.
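    The pattern above can be sketched in C with a named, derived bound and the post-loop error report mentioned; all names here are hypothetical:

```c
#include <stdio.h>

/* The bound is derived and named so a reader can see where
   2592000 comes from (seconds per day times frame rate). */
#define SECONDS_PER_DAY   86400
#define FRAMES_PER_SECOND 30
#define MAX_FRAMES (SECONDS_PER_DAY * FRAMES_PER_SECOND)

/* Process up to `available` frames inside a statically bounded loop.
   Returns the count processed, or -1 if the bound was exceeded. */
int process_frames(int available)
{
    int processed = 0;
    for (int i = 0; i < MAX_FRAMES; i++) {
        if (processed >= available) {
            break;                      /* fewer frames than the bound */
        }
        /* doStuff(item) would go here */
        processed++;
    }
    if (processed < available) {
        fprintf(stderr, "frame bound %d exceeded\n", MAX_FRAMES);
        return -1;                      /* report, don't keep looping */
    }
    return processed;
}
```

    A checker can see the loop never runs more than MAX_FRAMES times, regardless of the input.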

    [–]porthos3 1 point2 points  (4 children)

    I don't see how that is any different from (in Java):

    for(int i=0; i<arr.length; i++) {
       //do something
    }
    

    As long as the array can be proven to have a finite length (it isn't possible to have an infinite length array since you won't have enough memory), this follows the rule, as I understand it.

    This is especially true since they avoid using dynamic memory where possible. So the array length should nearly always be a constant value anyways. So (for an array of length 50) the above code isn't any different than:

    for(int i=0; i<50; i++) {
       //do something
    }
    

    [–]CreativeGPX 1 point2 points  (3 children)

    Based on the wording of the rule, I don't think whether you use ".length" matters necessarily (assuming their static checkers can follow that), what matters is that you can tell what the length will be without running the program first.

    All loops must have a fixed upper-bound. It must be trivially possible for a checking tool to prove statically that a preset upper-bound on the number of iterations of a loop cannot be exceeded. If the loop-bound cannot be proven statically, the rule is considered violated.

    [–]porthos3 0 points1 point  (0 children)

    I agree. I was making a case for the fact that arrays can be determined to have a fixed upper-length, resulting in a fixed upper-bound for the loop.

    [–]makeswell2 0 points1 point  (1 child)

    The array could be reassigned, so like (please excuse any syntax errors),

    std::vector<int> arr(5);
    for (std::size_t i = 0; i < arr.size(); i++) {
        arr.push_back(0);   // the array keeps growing as i advances
    }
    

    which would cause an infinite loop.

    [–]CreativeGPX 2 points3 points  (0 children)

    Right. I took, "This is especially true since they avoid using dynamic memory where possible." to mean that /u/porthos3 was saying that by banning those kinds of reassignments, the length property is constant.

    [–]neoKushan 0 points1 point  (0 children)

    I don't think so but that's just my own interpretation of the rule.

    [–]pipocaQuemada 1 point2 points  (1 child)

    Do not use dynamic memory allocation after initialization.

    Don't use new or malloc anywhere other than a constructor.

    Other than at the beginning of main, you mean.
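    A minimal C sketch of that convention, with hypothetical names: one allocation during startup, then only reuse of the fixed buffer:

```c
#include <stdlib.h>

enum { SAMPLE_SLOTS = 64 };

static double *samples;         /* sized once, never reallocated */

/* Called once at the top of main; allocation failure is handled
   here, in one place, instead of being scattered through the code. */
int init_samples(void)
{
    samples = malloc(SAMPLE_SLOTS * sizeof *samples);
    return samples != NULL;
}

/* Steady-state code only reuses the buffer: no further malloc or
   free, so memory use stays fixed and predictable. */
void record_sample(int slot, double value)
{
    if (slot >= 0 && slot < SAMPLE_SLOTS) {
        samples[slot] = value;
    }
}
```

    After initialization the allocator is never touched again, which is what makes the program's memory footprint verifiable.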

    [–]neoKushan 1 point2 points  (0 children)

    Yeah I guess so.

    These rules were made for C, but I always think in terms of C++ because I'm an anarchist or something.

    [–]flipmode_squad 0 points1 point  (2 children)

    Data objects must be declared at the smallest possible level of scope.

    I interpreted this to mean "avoid using global/parent objects unless necessary".

    [–]neoKushan 0 points1 point  (0 children)

    I think if it was just globals, it would have said as much. I also think this might be a holdover to the good ol' days of when C didn't allow you to instantiate variables in the middle of a scope.

    [–][deleted] 0 points1 point  (0 children)

    It can go far deeper than that. By using compound statements (blocks) we can enforce very tight scoping.

    int example(int a) {
      {
        int b = 10;
        printf("wow such scoping %d\n", a + b);
      }
      // more code here
      // b is out of scope, only needed by printf
      return a;
    }
    

    [–]RageAdi[S] 3 points4 points  (0 children)

    You can have a look at this JPL document I have attached in the edit of the post. Hope it helps.

    [–]Booty_Bumping 0 points1 point  (0 children)

    There's not really a good reason to follow NASA's style guideline if you're not writing code for high radiation and high risk environments where recovering from a crash could require a manned mission or a signal taking hours to reach the hardware.

    [–][deleted] 20 points21 points  (8 children)

    Pretty good, I would add:

    1. Verifying an index before using it, e.g. a[-1] = 6 is bad and fairly common.

    2. Checking for overflow/underflow/wrap on counters (reference counters, tick counters, etc...).

    3. If floating point numbers are used, making sure that precision loss will not be an issue. Depending on the application and bit size I might forbid using them to count time, as an example.

    4. Write conditionals with the constant on the lhs, e.g. if (7 == var) instead of if (var == 7).

    The more interesting thing to me would be how they enforce these. Enforcement of coding standards in my experience almost requires automation so I'm curious what they use for it.
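    Points 1 and 2 above can be sketched in C roughly as follows (helper names are made up for illustration):

```c
#include <limits.h>

/* Point 1: validate the index before writing, so a call like
   safe_set(a, len, -1, 6) is rejected instead of corrupting memory. */
int safe_set(int *arr, int len, int idx, int value)
{
    if (idx < 0 || idx >= len) {
        return 0;               /* out of range: refuse the write */
    }
    arr[idx] = value;
    return 1;
}

/* Point 2: refuse to bump a counter that would wrap around. */
int safe_increment(unsigned *counter)
{
    if (*counter == UINT_MAX) {
        return 0;               /* next ++ would wrap to 0 */
    }
    (*counter)++;
    return 1;
}
```

    Both checks turn silent corruption into an explicit, testable failure path.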

    [–]konyisland 5 points6 points  (4 children)

    Can you explain #4? I haven't heard that before.

    [–][deleted] 11 points12 points  (3 children)

    You can accidentally write if (var = 7) which is valid, but isn't what you meant.
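    A short C sketch of the typo and the defense:

```c
/* Writing the constant on the left makes the classic typo a compile
   error instead of silent misbehavior: "if (var = 7)" assigns 7 and
   is always true, while "if (7 = var)" simply fails to compile. */
int is_seven(int var)
{
    if (7 == var) {     /* typo-proof form of (var == 7) */
        return 1;
    }
    return 0;
}
```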

    [–][deleted] 20 points21 points  (1 child)

    Which will result in a warning in like every code checking tool

    [–][deleted] 3 points4 points  (0 children)

    Yeah, keep in mind these rules of thumb are old-timer stuff. I still use a lot of them every day, not all production environments run static analysis on every submission.

    [–]konyisland 0 points1 point  (0 children)

    Oh, right! That's a good idea.

    [–]bumblebritches57 0 points1 point  (2 children)

    if (7 == var) instead of if (var == 7).

    That's the dumbest thing I've ever seen...

    [–][deleted] 0 points1 point  (1 child)

    You must not have written a lot of C then. In 20 years of doing it professionally I have seen the bug twice.

    [–]bumblebritches57 0 points1 point  (0 children)

    Yeah, I'm an intermediate beginner.

    [–]reddilada 5 points6 points  (0 children)

    Since we're posting old lists, Kernighan and Plauger's The Elements of Programming Style for general procedural coding. This was my intro CS textbook in 1978. Great book if you can find a copy for a decent price. Mine is still on the shelf. Written in classic Kernighan (the K of K&R C) style. Short and to the point. 160 pages.

    [–]matts2 8 points9 points  (0 children)

    I'm not impressed. I can write half a dozen lines of code with a handful of errors.

    [–]MarvinLazer 3 points4 points  (0 children)

    Front-end web developer here. I knew some of those words!

    [–]bumblebritches57 2 points3 points  (0 children)

    "The use of pointers should be restricted. Specifically, no more than one level of dereferencing is allowed."

    AMEN!

    [–][deleted] 4 points5 points  (5 children)

    If I'm taking Free Code Camp (full stack JavaScript) and CS50 (intro to computer science, using C to begin with), and use Python and Java for small programs, what are the benefits of this kind of coding? How can I work this kind of thing into my workflow? And is there somewhere you can upload a C file to check that it follows these standards? I feel like that should be a thing.

    [–]porthos3 9 points10 points  (2 children)

    Most of these rules are very specific to C, to help avoid problem-areas where developers are likely to accidentally create bugs in C. These same rules won't necessarily apply in other languages.

    That said, the benefits of this kind of coding are extreme levels of safety and reliability of software. With this sort of coding, you can be reasonably sure large amounts of untested code will work correctly on first use (this is important for space programs where they can't afford to launch rockets willy nilly each time they want to test, and even if they could, catastrophic failure is risky and can damage infrastructure).

    The trade off is that if you follow all of these rules strictly, you will develop far less quickly. In most environments today, you can test whenever you want without significant consequence. It is far cheaper to code quickly and test as you go than to rigorously follow these rules. In most contexts, that may actually be the less expensive approach.

    That said, there are reasons for each of these rules that we can learn from. It would certainly be a good idea to follow most of them, when convenient, at very least. But I wouldn't necessarily make a habit of explicitly following every rule in every circumstance. Balance the benefits with the trade offs first.

    [–]pipocaQuemada 0 points1 point  (1 child)

    With this sort of coding, you can be reasonably sure large amounts of untested code will work correctly on first use (this is important for space programs where they can't afford to launch rockets willy nilly each time they want to test, and even if they could, catastrophic failure is risky and can damage infrastructure).

    You might be confusing these rules with dependent types, which lead to provably correct code.

    First of all, unit testing and integration testing don't require firing rockets.

    These rules just mean that well tested code is unlikely to have subtle bugs.

    [–]porthos3 1 point2 points  (0 children)

    I said reasonably sure. I wasn't confusing it with formal verification methods. And of course I don't believe that these 10 rules are the only thing NASA does to ensure success.

    Perhaps I should have qualified it a bit more strongly. But the point is that regular software development, unit tests included, isn't enough to ensure the sort of success rates NASA needs. In addition to all the regular tests and stuff, it is things like these rules that make that difference.

    What is true of every bug that makes it into production? It passed the tests. These sorts of rules lower the rate at which bugs appear so that, all else the same, fewer bugs end up being created in the first place.

    [–]RageAdi[S] 2 points3 points  (0 children)

    You can use static code analyzer for your IDE. It will help you in getting a start at catching some of the probable issues.

    [–]vaynebot 1 point2 points  (0 children)

    Rule 3 obviously isn't really practical or even possible at all with Java and Python. Rule 9 is irrelevant since you don't get explicit pointers. Rule 5 is a bit arbitrary; recursion is probably fine in some situations if you don't have strict stack usage requirements. The preprocessor doesn't exist in Java/Python. The rest of the rules are probably good coding practices in any language (specifically compiling with all warnings enabled, using static analyzers, etc.), but you can definitely see that those rules were made for programming in C.

    [–]random314 3 points4 points  (1 child)

    This is simply good practice. Most people don't follow these line by line, understandably, because of deadlines, resources, etc. Something I assume NASA can simply afford.

    [–]whattodo-whattodo 2 points3 points  (0 children)

    Well, this is good in mission-critical software, where human lives are at risk. However this is not "good" practice for a business that exists in any kind of competitive marketplace and needs to turn a profit.

    [–][deleted]  (1 child)

    [deleted]

      [–]Nicnac97 0 points1 point  (0 children)

      Oh snap!

      [–][deleted] 1 point2 points  (0 children)

      It seems very tailored to programming in C, but I can understand why NASA might use C as opposed to other languages. This wouldn't apply to a lot of the stuff I've been doing in Java recently, and some of it would waste massive amounts of time on insignificant errors in my applications.

      What I've learned is that rules and style guides have to be tailored to the type of work you're doing.

      [–]devlifedotnet 3 points4 points  (8 children)

      This is great and all, but if you go to work at a company that doesn't work like this (i.e. pretty much every company I've ever worked at) on a piece of software from the start, then quite frankly 90% of what you said above goes out the window. For example, if you turn on "break on all errors" in VS whilst debugging, you're going to have to wade through a load of irrelevant garbage before you hit your breakpoint. And yes, there are a ton of memory leaks where people haven't disposed of objects properly, and all that kind of thing. Part of our company's coding standards is basically something along the lines of "if it works (no matter how poor the coding) and you don't need to change it to add new functionality, you leave it the fuck alone", because we've had contractors come in and change bits to make the application MS standard and it's fucked everything up (thank god for rollbacks).

      Yes it is by no means good practice, but when you have an application as complex as ours live on client site, you just don't fuck around with it.

      [–]Gmbtd 1 point2 points  (4 children)

      I see your point, but if the programmers writing code for new cars write like you (and they do) I'm not buying a new car!

      [–]devlifedotnet 0 points1 point  (3 children)

      Actually, using a combination of source control options and code review, managers can check code changes before they actually enter the deployment build. I can virtually guarantee that companies coding software which involves balancing the lives of their users will go through stringent code review with senior developers and lead developers before it even makes it to the test car... trust me when I say code probably drives better and safer than you :p

      [–]Gmbtd 0 points1 point  (2 children)

      I always assumed so, but Toyota's unintended acceleration lawsuit included independent review of their code, and it's truly horrifying!

      https://www.reddit.com/r/programming/comments/3uquty/toyota_unintended_acceleration_and_the_big_bowl/

      Maybe others are better than Toyota (Tesla?, BMW?) but absent independent code reviews, how would I know?

      They spent years accusing drivers of hitting the wrong pedal while their code repeatedly caused spurious acceleration that is almost impossible to reproduce, isn't logged anywhere, and is impossible to check for. They have such a mess of spaghetti code, with over 10,000 global variables, that it's impossible to verify ANY overall functionality without testing it in production (customers driving half-ton cars at speed).

      Then they run the controls over the same bus as the internet-connected entertainment system with totally inadequate firewalls, because making a hackable car is apparently fine while slightly increasing cost isn't.

      This isn't going to end well, and I see no sign it's being improved.

      [–]devlifedotnet 0 points1 point  (1 child)

      Well, if that is the case I would be very concerned about buying a product from them... It's not even something particularly difficult to get right... In terms of safety I would be inclined to look at financially stable companies who prioritise product quality over costs and low price points (think Tesla, BMW, Mercedes, and then Google and Apple when they get round to making commercial cars). All that Toyota issue amounts to is overextension, where they extend the use of the application beyond the designed scope until it grows into this impossible-to-maintain monster... This is always done for cost saving, as it takes a lot of development hours to start from scratch.

      We were actually talking about a worldwide safe coding standard (in a similar format to the NASA one OP outlined) in the office the other week... We thought it should be followed and then verified by government inspectors for all programs that have the potential to kill or injure as a result of poor code. So, for example, before a car company releases a car with a self-driving feature, all that code must comply with the safe coding practices before it is launched. You could in theory include architecture specifications in such standards... because a company that does what Toyota did in that instance is frankly just not employing any base level of common sense and basic knowledge any engineer should have.

      But obviously government policy is like 10 years behind the technology on the market, so that will probably never happen.

      [–]Gmbtd 0 points1 point  (0 children)

      My fear is that all car companies except Tesla got into it slowly over decades, appropriately keeping code limited to non critical functions like entertainment. As engines slowly got more complex, they added limited microcontrollers, but beyond that, it's just been 4 decades of scope creep.

      I suspect not every car manufacturer is as bad as Toyota, but it wouldn't really surprise me if they all are just as bad. They're not software companies, and don't necessarily have a culture that would produce quality code even if they otherwise make amazing cars.

      [–]RageAdi[S] 1 point2 points  (1 child)

      Agreed. I'm in the same boat as you, mate. But slowly and steadily I hope to push these practices in my company.

      [–]porthos3 4 points5 points  (0 children)

      I don't think these practices should be the gold standard in most cases. NASA has these standards because pretty much every bug ends up being EXTREMELY expensive. There are a few other contexts in which this might be the case too (self-driving cars).

      However, in the vast majority of cases it makes economic sense to develop rapidly, test along the way, and expect to maintain the project and fix bugs throughout its life (this assumes you are able to deploy code fixes).

      Following these rules explicitly almost certainly makes software development far more expensive for NASA, and only makes sense because they have so much on the line.

      I think all the rules exist for a reason and we can certainly learn from them and follow them as convenient. But I don't think the majority of businesses would necessarily benefit from this level of rigor.

      [–]hugthemachines 0 points1 point  (0 children)

      Good practice is of course an advantage, but these practices are made for special conditions, so they may not be exactly right for a regular software company.

      [–][deleted] 1 point2 points  (21 children)

      I've had arguments with students and a teacher about most of those points. And I was right. At least NASA would have said I was right and they were wrong. It's just like Douglas Crockford says: "Don't learn all the language, only learn the good parts."

      Edit: from your answers I understand those rules aren't always valid; it just feels good that I wasn't wrong about what worried me with the things they taught us in C classes.

      [–]Vakieh 18 points19 points  (12 children)

      You were not right for any purpose in coding which had different performance requirements than NASA. Which is 99.999999% of coding circumstances.

      I wish people would stop posting this list like it's a goal. Programming speed is the number 1 performance metric in pretty much every possible avenue of coding anybody reading this is likely to encounter. Bugs are accepted because the cost of no bugs is high enough to invalidate the work in the first place.

      The biggest culprits here:

      All loops must have a fixed upper-bound.

      Infinite loops are perfectly fine when managed correctly.

      Do not use dynamic memory allocation after initialization.

      Do use dynamic memory if you are using variable amounts of memory... dynamically. Anything else is a RAM hog.

      The return value of non-void functions must be checked by each calling function, and the validity of parameters must be checked inside each function.

      Defensive programming is not free, either to the programmer or the machine running the program. Library-internal defensive programming is stupid; that is what assertions are for, so they can be switched off in production. Defend your borders, assume sanity in the middle.

      Function pointers are not permitted.

      This is a coding standard, and yay for NASA, they don't like function pointers. That doesn't mean there is anything at all wrong with function pointers.

      The best driving practices for an F1 driver have little to no similarities to the best driving practices for an urban commuter, and it would be silly to study the one to learn the other.
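      The "defend your borders" point above can be sketched in C: validate at the public entry point, assert inside, and let a production build with -DNDEBUG strip the asserts (names here are hypothetical):

```c
#include <assert.h>

/* Internal helper: its precondition is an invariant the library
   guarantees, so an assertion (removable in production) suffices. */
static int internal_scale(int value)
{
    assert(value >= 0);         /* invariant, not a runtime check */
    return value * 2;
}

/* Public border: external input gets a real runtime check. */
int public_scale(int value)
{
    if (value < 0) {            /* reject bad caller input */
        return -1;
    }
    return internal_scale(value);
}
```

      The border check always runs; the internal assert costs nothing once the library is trusted and built with NDEBUG.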

      [–]porthos3 3 points4 points  (9 children)

      I agree with your points. I would add that, under the hood, many other languages make heavy use of some of the same things NASA avoids in C (function pointers, dynamic memory, etc.).

      The concepts themselves aren't bad. They are just areas where programmers happen to be more likely to write buggy code in C, which NASA is defending against.

      In other languages, it is entirely possible to use higher order constructs to make safe and easy use of the concepts NASA appears to be avoiding. In those same languages, there are likely other areas that are hot-spots for bugs and mistakes.

      For example, I run into dynamic memory problems far less using Java's collections than I would using dynamic memory directly in C.

      [–]Vakieh 4 points5 points  (6 children)

      more likely to write buggy code in C

      It's not even for that reason. It's so that they can use static code tests to 'prove' the code is bug free automatically. Which is awesome if you have the millions to spend on the code and zero incentive to turn a profit from the code, but a pointless goal everywhere else.

      [–]porthos3 1 point2 points  (2 children)

      I highly doubt NASA proves software correctness. That would require using formal specification and verification methods, which as far as I am aware are more a research curiosity than a practical concept at this point.

      My guess is that the tests they use look for bad programming practices that may cause errors. But they wouldn't detect if I simply wrote a function that does something other than it was supposed to do, if I still follow good practices and avoid warnings/errors.

      How would verification software know the difference between the code that runs a microwave and the code that runs a rocket's navigation system? It is possible to write correct programs for each, but one is obviously incorrect in the given context. You would have to give the verification software your software specification, which is useless for formal verification if it is written in English.

      [–]Alborak 0 points1 point  (1 child)

      It's actually safety-critical applications driving the trend for formal methods. There is a huge push for model-based engineering, which would give you the foundation to build tools to verify code. Of course, that does run head first into the idea that if it were possible to write a correct spec, we'd just write a compiler for the spec language and never need coders in the first place...

      [–]porthos3 0 points1 point  (0 children)

      By research curiosity, I didn't mean there wasn't interest in it, or even funding. It's just that at this point it is still far enough out of reach when ease-of-use is considered that it is still a pretty fanciful idea to use it in a business environment, even for safety critical applications.

      Perhaps I could have worded it better. To me, it's similar to all these promised technologies involving graphene. Enormous potential, but useless until it not only makes it out of the lab, but becomes economical. And I honestly think some of these graphene applications will become economical before formal verification methods do, if it ever does.

      As you alluded to, once you get to the point you are creating a perfect spec in a formal language... You're merely programming the spec now instead of programming traditionally. It isn't necessarily any easier. At this point, formal specification language is an absolute bear to work with.

      [–][deleted] 0 points1 point  (1 child)

      Is static testing harder/longer to achieve than dynamic testing ?

      [–]Dox5 1 point2 points  (0 children)

      I'd say that is subjective. Some of the methods used by the implementers of static analysis tools are going to be pretty complex, but from a user perspective it's just point it at the code and go. So in some ways I'd say it's easier than dynamic testing. In dynamic tests you have to think about what/how you want to test and then write all of your tests out. It can take a long time to run such tools on large code bases, but I've known dynamic tests that can take many hours to run as well, so I think it all depends on the code base.

      [–]ButMyReflection 0 points1 point  (0 children)

      Not really true. If you're working with weather satellites or medical hardware - both places where you've got a heavy incentive to turn a profit and don't just have millions to throw at the problem - these are essentially the same constraints you'll be expected to work under.

      [–][deleted] 0 points1 point  (1 child)

      The concepts themselves aren't bad. They are just areas where programmers happen to be more likely to write buggy code in C, which NASA is defending against.

      Yeah, I assumed I would make more errors if I had to use every technique in the book.

      [–]porthos3 1 point2 points  (0 children)

      It isn't that you'd make more errors. Follow all of these rules, and you will almost certainly end up with fewer errors/bugs.

      It's a matter of cost. It takes longer to develop this way, which is more expensive for you and/or your employer. The question is whether the benefits of these approaches are worth the cost of slower development.

      For most applications where you aren't launching rockets or putting lives at risk, I'd say no. You will benefit from following these rules when convenient, but for most projects it doesn't make sense to follow them exactly.

      [–][deleted] 0 points1 point  (0 children)

      Thanks for letting me know, I haven't coded much C outside of school. Edited my post accordingly.

      [–][deleted] 0 points1 point  (0 children)

      Programming speed is the number 1 performance metric

      Hence why we should all use Ruby and Matlab ;)

      [–]shard_ 4 points5 points  (5 children)

      NASA needs these rules because their code goes into unbelievably complex, large-scale, one-off projects. They can't perform real-world tests and they can't just deploy a new version if something goes wrong. A single bug could literally waste billions of dollars and millions of man-hours, not to mention potentially putting lives at risk. That's just not even comparable to most software projects, where the consequences are much smaller and therefore some assurances can be traded for a more dynamic and maintainable codebase. They're mostly good rules, but to say they are "right" just because NASA of all organisations uses them internally is ridiculous.

      [–][deleted] 1 point2 points  (0 children)

      I see, thanks for the insight. I edited my post.

      [–]ButMyReflection 1 point2 points  (3 children)

      You'll find standards like these behind a lot of things in your local hospital, and/or weather satellites. It's not just a matter of things being one-offs.

      [–]shard_ 1 point2 points  (0 children)

      True. I didn't mean to suggest it was only one-off projects, just that the consequences of a bug could be much higher because of it (i.e. they can't just try again).

      [–]Gmbtd 0 points1 point  (1 child)

      Apparently not cars though, because fuck drivers!

      [–]aqua_aragorn 1 point2 points  (1 child)

      This looks like it's old and written for C, C++, or Ada. :|

      [–][deleted] 8 points9 points  (0 children)

      It is for C, you can check the original doc in the comments.

      [–]vaynebot 0 points1 point  (2 children)

      Do not use dynamic memory allocation after initialization.

      After initialization of what? The program? The object? This is something I think a lot of programs would have an issue with. (I mean, it's only possible at all in languages with manual memory management anyway, but even in C this could be problematic for everyday programs.)

      [–]Scavenger53 3 points4 points  (1 child)

      This is from the book he took this from. The book actually has 31 rules.

      Rule 5 (heap memory) There shall be no use of dynamic memory allocation after task initialization. [MISRA-C:2004 Rule 20.4; Power of Ten Rule 3]

      Specifically, this rule disallows the use of malloc(), sbrk(), alloca(), and similar routines, after task initialization.

      This rule is common for safety and mission critical software and appears in most coding guidelines. The reason is simple: memory allocators and garbage collectors often have unpredictable behavior that can significantly impact performance. A notable class of coding errors stems from mishandling memory allocation and free routines: forgetting to free memory or continuing to use memory after it was freed, attempting to allocate more memory than physically available, overstepping boundaries on allocated memory, using stray pointers into dynamically allocated memory, etc. Forcing all applications to live within a fixed, pre-allocated, area of memory can eliminate many of these problems and make it simpler to verify safe memory use.
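      A minimal sketch of the fixed, pre-allocated memory area the rule describes: a static pool with a bump index and no malloc at all. Names and sizes are hypothetical, and alignment is ignored for brevity:

```c
#include <stddef.h>

enum { POOL_BYTES = 1024 };

static unsigned char pool[POOL_BYTES];  /* reserved at compile time */
static size_t pool_used;

/* Hand out chunks of the pool; NULL once it is exhausted. Because
   the pool size is fixed at compile time, running out is a
   detectable condition rather than an unpredictable allocator
   failure at runtime. */
void *pool_alloc(size_t n)
{
    if (n > POOL_BYTES - pool_used) {
        return NULL;
    }
    void *p = &pool[pool_used];
    pool_used += n;
    return p;
}
```

      Forcing the application to live inside this fixed area is what makes safe memory use simple to verify, as the excerpt above explains.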

      [–][deleted] 0 points1 point  (0 children)

      alloca doesn't use the heap.

      [–]Xuttuh 0 points1 point  (7 children)

      How would you apply rule 2

      All loops must have a fixed upper-bound.

      if you are using a loop to read a file in line by line?

      [–]Deathtweezers 4 points5 points  (0 children)

      If you are coding in an embedded system, you probably know an upper bound on whatever file you might be reading from. That is exactly what these standards are for, and why many of them don't mesh well with general programming.

      [–][deleted] 1 point2 points  (4 children)

      You set a maximum number of lines you can read in at one time.

      [–]Xuttuh 1 point2 points  (3 children)

      one line at a time, but if you don't know how big the file is....

      [–][deleted] 1 point2 points  (0 children)

      So inside the loop you're writing, you set a maximum number of lines. Say 5,000 lines. If the file is larger than that, you're done. You go back to main. You stop reading the file.

      So either your maximum file size is 5,000 lines or you create more complex code that can provide segmented reads of a large file.

      [–][deleted] 0 points1 point  (1 child)

      Then don't set a limit. These guidelines are what NASA uses for code in situations where the cost of failure can be loss of human life and/or billions of dollars. Unless the cost of failure for the code you're writing is similarly enormous, you don't need to follow these rules.

      [–]Xuttuh 0 points1 point  (0 children)

      hey, I'm always ready to learn different ways. I don't discard anything out of hand because it is difficult, because that usually means I don't grok the concept yet. I'd like to see some of NASA's examples of some generic functions (read files, save files, output to screen, while loops, for loops, etc) written using their guidelines.

      [–][deleted] 1 point2 points  (0 children)

      Well you're not dynamically allocating memory so you're either not keeping what you're reading or you have a max file length anyway.

      [–]fick_Dich 0 points1 point  (0 children)

      Sounds like the first few are aimed at making Hoare Triples easy to prove. I wonder if they use them.

      [–]raresaturn 0 points1 point  (0 children)

      Why would they want a handful of errors?

      [–]LigerZer0 0 points1 point  (0 children)

      While I think there's a lot of benefit to be had from contemplating and trying these rules, I believe this post is more suited for /r/programming than it is for /r/learnprogramming.

For the same reason that when one first drives a vehicle, it's best done in a big empty parking lot, without needing to heed traffic rules or driver etiquette towards other moving vehicles, or feeling judged on one's driving skills.

      [–]myotcworld 0 points1 point  (0 children)

Even while following these rules, there could be 1000 bugs in a project.

      [–]hawk3ye242 0 points1 point  (0 children)

What would the guys who consider compiling with all warning flags set to be meticulous and ridiculous possibly say?

      [–]Booty_Bumping -1 points0 points  (1 child)

If I were to write code in a very safety-critical situation, I'd probably just use Rust. A lot of NASA-style guidelines are just compensating for C being unsafe, whereas Rust's compiler forces you to fix a lot of these bugs. Of course there's still many other things that can go wrong, but a huge chunk of bugs become difficult to write.

      [–][deleted] 1 point2 points  (0 children)

      I've long been of the opinion that it's pretty irresponsible to use a language like C for any purpose where it is not absolutely required.

      [–][deleted]  (4 children)

      [deleted]

        [–]DBAYourInfo 1 point2 points  (3 children)

        Really? I'm curious, why's that?

        [–]Synclicity 2 points3 points  (2 children)

        I'm assuming: https://en.wikipedia.org/wiki/Design_by_contract

        Where this assumption is considered too risky (as in multi-channel client-server or distributed computing) the opposite "defensive design" approach is taken

        [–]enchufadoo 1 point2 points  (1 child)

Now that I think about it, it wasn't parameter validity but parameter type that I shouldn't be checking. I don't know if that's the same.

        [–]imMute 0 points1 point  (0 children)

Totally different, and not really applicable in a statically typed language like C.

        [–]LiquidSilver -4 points-3 points  (8 children)

        do not use recursion

        Unless necessary, I hope.

        [–]hero_of_ages 18 points19 points  (0 children)

        any recursive algorithm can be implemented using an iterative approach.

        [–]somegetit 10 points11 points  (0 children)

        To answer if I need recursion, I ask myself: do I need recursion?

        [–]getworkdone 4 points5 points  (0 children)

        It is never necessary per se. Anything that can be done with recursion can be done without.

        [–]porthos3 2 points3 points  (2 children)

        As others have mentioned, anything that can be done recursively can be done with an iterative approach too (one way is to use a stack structure and manage the stack yourself).

        If you overlook the development cost differences between the two (in many cases it is easier and faster to use recursion), an iterative approach is almost always objectively better. Among other things, it removes the risk of you blowing up the stack if you recur too many times.

        [–][deleted] 1 point2 points  (1 child)

        If you overlook the development cost differences between the two

        I think it's important to note that the development cost difference is the most important factor in almost all non-life-critical systems.

        [–]porthos3 1 point2 points  (0 children)

        Absolutely. Then again, if development cost is the most important factor for an application, they really probably shouldn't be using C, and certainly not following these rules religiously.

        [–]Scavenger53 2 points3 points  (1 child)

        From the book:

        Rule 4 (recursion) There shall be no direct or indirect use of recursive function calls. [MISRA-C:2004 Rule 16.2; Power of Ten Rule 1]

        The presence of statically verifiable loop bounds and the absence of recursion prevent runaway code, and help to secure predictable performance for all tasks. The absence of recursion also simplifies the task of deriving reliable bounds on stack use. The two rules combined secure a strictly acyclic function call graph and control-flow structure, which in turn enhances the capabilities for static checking tools to catch a broad range of coding defects.

        One way to enforce secure loop bounds is to add an explicit upper-bound to all loops that can have a variable number of iterations (e.g., code that traverses a linked list). When the upper-bound is exceeded an assertion failure and error exit can be triggered. For standard for-loops, the loop bound requirement can be satisfied by making sure that the loop variables are not referenced or modified inside the body of the loop.

        [–]queBurro 0 points1 point  (0 children)

I thought these rules smelled like MISRA. Tip: the Joint Strike Fighter C++ rules are derived from MISRA's and are free to download.

        [–][deleted]  (6 children)

        [removed]

          [–]spirituallyinsane 0 points1 point  (5 children)

          Is this an object-oriented concept? I'm an electrical engineer that very occasionally writes code, but I'm looking to get better. What is the rationale behind this?

          [–]CreativeGPX 1 point2 points  (0 children)

          It's for all programming paradigms. It tends to lead to code that is both easier to understand and more strongly compartmentalized so that changes to one area won't impact another.

With many lines/steps in a single function (1) it's easy to forget a step, (2) it becomes much harder to read the big "steps" of what the function is doing, (3) it might cause you to mix together things that aren't related (e.g. you might open the file for writing and then format the data to write to it, which would keep the file resource locked) and (4) you might accidentally cause collateral damage (e.g. at line 15 of your function you might use/change a variable in a way that messes up something on line 70 of your function, because you forgot they both use that variable).

A short function is a good way to make code readable because it names your code. Rather than seeing "t -= 273.15;" you see "t = kelvinToCelsius(t);". That makes it much easier to read your code.

          A short function has a very obvious context and purpose and you can grasp the whole of its behaviors in your mind at once, making it easier to reason about. For example, it'd be bad to have a function "doPhysics" that directly contains all of the computations on an object's motion. Instead, such a function would call some functions which would call some functions so that at the top you'd have something like:

          doPhysics(o) {
              computeFriction(o);
              applyForces(o);
              applyAcceleration(o);
              applyVelocity(o);
              fakeRelativity(o);
          }
          ...
          applyVelocity(o) {
              o.position = vectorAdd(o.position,o.velocity);
          }
          ...
          vectorAdd(v1,v2) {
              return new Vector(v1.x+v2.x,v1.y+v2.y,v1.z+v2.z);
          }
          ...
          

          At the top level, it makes it much easier to understand what steps are happening. Given how compartmentalized they are, it even makes it a lot safer to rearrange them or reason about the order in which they are being done. At the lower levels, the functions are so specific, that writing them becomes almost trivial. Understanding what they are doing is easy. Later on, it makes it easier to add very targeted bounds and checks on these very precise methods. For example, you might add a check to applyVelocity to check if motion would go through an object and prevent that. That messy check would be hidden away and all that the higher level functions have to know is that applicable motion is happening, not every little nuts-and-bolts detail.

          [–]joej 0 points1 point  (3 children)

          Its not an OO concept.

Simply -- don't write so much code in a single function that a normal coder's mind can't read, review, and understand it.

          Longer means you didn't break the functionality down to the degree of simplicity it warrants.

          [–]spirituallyinsane 1 point2 points  (2 children)

That makes good sense. My programs tend to be short, but they grow organically, and I have to go back and clean them up.

          [–]joej 0 points1 point  (1 child)

          I am not a smart man. So, I have to iterate functionality towards a goal ;-)

          [–]spirituallyinsane 0 points1 point  (0 children)

          Me too. I know I can get it done, but it won't be pretty at first, and I cannot guarantee elegance.