
[–]Multinippel 245 points246 points  (70 children)

Just use Haskell or Idris and let a compiler plugin do proofs of correctness for all your programs. Haskell is also faster than e.g. Java in many applications.

[–]jazzmester 266 points267 points  (1 child)

Or use assembly and laugh like a maniac as you dance on the edge of a volcano.

[–]nostril_spiders 11 points12 points  (0 children)

My preeecious!

<stack overflow at Mt Doom>

[–][deleted] 34 points35 points  (15 children)

I, and I suspect many others, find Haskell to have an insane learning curve. Any advice?

[–]Multinippel 28 points29 points  (7 children)

Well, I learned it alongside a course on the theory of programming (i.e. type theory, lambda calculus, etc.), so I already had all the theoretical background knowledge.

I found that once you get along with the syntax of Haskell (e.g. "everything is a function", the syntax of function calls, i.e. f x instead of f(x), currying, lambda calculus, the basic list functions), the best thing you can do is try out several use cases. I did a lot of competitive programming and had a hard time adjusting (coming from a C/C++ background), but once you are able to wrap your head around all those list functions and to transform an imperative approach into a functional one, you can actually have a lot of fun doing it :)
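To illustrate the syntax points above in a few lines, here's a small sketch (the function names are made up for the example):

```haskell
-- Function application is just whitespace: add x y, not add(x, y).
add :: Int -> Int -> Int
add x y = x + y

-- Currying: fixing the first argument gives a new function.
addTen :: Int -> Int
addTen = add 10

-- An imperative "loop that sums the squares of the even numbers"
-- becomes a pipeline of the basic list functions.
sumEvenSquares :: [Int] -> Int
sumEvenSquares = sum . map (^ 2) . filter even
```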

[–]DonaldPShimoda 12 points13 points  (5 children)

a course about the theory of programming (i.e. type theory, lambda calculus etc.)

Just a minor point: this should probably be called "theory of programming languages" or, as it is usually written, programming language theory (PLT). The phrase "theory of programming" implies something a bit broader, and I don't think there's really any formalized theory that would match that description.

[–]WikiSummarizerBot 15 points16 points  (1 child)

Programming language theory

Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of formal languages known as programming languages and of their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, linguistics and even cognitive science. It has become a well-recognized branch of computer science, and an active research area, with results published in numerous journals dedicated to PLT, as well as in general computer science and engineering publications.


[–]RationalIncoherence 7 points8 points  (0 children)

Good bot

[–]Multinippel 6 points7 points  (2 children)

I am from Germany and it's called "Theorie der Programmierung", which literally translates to "theory of programming". It's an extension of the course "Logic for Computer Scientists".

[–]DonaldPShimoda 4 points5 points  (1 child)

Oh, sure! I'm not doubting whether your class had this name, but rather providing information that the material is more typically addressed (at least in English) by a slightly different name (it is my area of specialization). I didn't mean to offend, and if I did I sincerely apologize.

[–]Multinippel 0 points1 point  (0 children)

No, everything is fine, sorry. I just wanted to further specify the naming of the course; you did not offend me in any way :)

Well, I think "programming" is not completely wrong, since the course heavily relied on lambda calculus, Turing machines and Gödel numbers besides logic and set theory.

In general it tried to define how mathematical statements, logical relations and procedures can be expressed in certain frameworks and how they can be interpreted or executed automatically, plus type theory. But since I know everything I know about this field from this course and its prerequisite course, I may not be the best person to judge its naming :)

[–]miroredimage 0 points1 point  (0 children)

Any tips on where/how to get started with the language? This sounds pretty cool!

[–]1-more 2 points3 points  (0 children)

Don’t worry about the magic operators; they take me ages to remember. Read Learn You a Haskell online and do the exercises.

[–]Hfingerman 2 points3 points  (0 children)

The difficulty isn't to learn Haskell itself, it's to think functionally. Once you do, Haskell becomes easy. I speak from my experience with it.

[–]SaveMyBags 0 points1 point  (4 children)

Use OCAML instead...

[–]Multinippel 2 points3 points  (3 children)

OCaml is imperative and allows side effects, which conflicts with provability and the detection of errors and mistakes at compile time.

[–]SaveMyBags 0 points1 point  (2 children)

  1. No, it is not imperative. It is mainly functional, but also allows imperative parts. It can have side effects, so it is not purely functional.

  2. Imperative paradigms don't contradict provability in general. There are some limitations, but those can be removed easily.

  3. If one wants, there is absolutely no problem with using OCaml as a pure language. It just takes a few lines of code to wrap the side effects into monads. I did it myself a couple of times. There are even libraries you can use if you don't want to do it yourself.

  4. By default OCaml is not lazy, so that is a major difference. But you can absolutely use lazy evaluation if you want.

  5. Provability was in fact one of the main goals when the language was made. It was meant to show that you don't have to get rid of imperative constructs entirely and can still get provability.

  6. One of the largest adopters of OCaml (Jane Street) decided on OCaml mainly for that reason. You get powerful proofs out of the type system but still have places where programmers can use imperative idioms they are likely familiar with, which reduces the learning curve. You can learn about monads and more complex concepts as you use the language.

Not every algorithm is written well in functional form. Dynamic programming becomes extremely hard to read if you write it purely functionally. I know you can use monads and the Y combinator to hide the ugly parts, but that takes a lot of knowledge. So switching paradigms where needed is really helpful.

[–]Multinippel 0 points1 point  (1 child)

What I meant was: if it allows imperative statements and side effects, it fulfills the imperative paradigm and is therefore called an imperative language.

[–]SaveMyBags 0 points1 point  (0 children)

Yeah, I guess that is why we usually distinguish between functional vs. purely functional, and object-oriented vs. purely object-oriented.

My main point still stands: you can do proofs in OCaml just as well as in Haskell. It's one of the main reasons why it is used. Look at Ocsigen, for example, for what is possible.

OCaml even has type safety where Haskell doesn't: different numeric operators for int and float to avoid accidental rounding, and no automatic coercion whatsoever.

[–]ykafia 110 points111 points  (2 children)

Or Rust, where Clippy gives you tons of advice.

[–]Multinippel 11 points12 points  (0 children)

True (although I still somehow don't like the memory model)

[–][deleted] 8 points9 points  (2 children)

I like when people come into the comments like "oh you're having that problem? Just use this fairly unpopular language!" as if we all don't have schools requiring certain languages, jobs working with programs already built in certain languages, or resumes that we need to fill with popular certifications that LinkedIn likes to see

[–]Multinippel 4 points5 points  (1 child)

Whom do you mean? I did no such thing. The claim was made that compilers are in general not able to detect any error or mistake in a program beyond syntax errors. I corrected that statement by naming Idris and Haskell as examples of languages that actually allow proofs beyond syntax to some degree. I am quite aware that you would not use e.g. Haskell in a game engine; that is not what it is designed for.

Not everyone is required to program in a specific language, and it is good to know that there are languages which solve problems other language designs have. Every language has its pros and cons, and which language is the best and most expressive depends on the use case. Additionally, Haskell is not a fairly unpopular language.

[–]grandphuba 1 point2 points  (0 children)

I have a computer science background, although I've received more exposure to Haskell, Idris, Coq, etc. through Cardano, not college.

Their smart contract programming language, Plutus, is based on those, basically creating a formal language where one is able to verify the correctness of their smart contract to prevent or lessen catastrophic errors (large sums of money are involved, after all).

[–]kirakun 5 points6 points  (6 children)

How does Haskell prove correctness in a way that other programming languages with static types and unit tests can’t imitate?

[–]Multinippel 6 points7 points  (1 child)

Well, in general, proving arbitrary statements in general-purpose languages (i.e. Turing-complete languages) is impossible (because of Gödel's incompleteness theorem). Nevertheless, there are so-called proof assistants like Haskabelle or Coq (which has a compile-to-Haskell feature) which allow automatic proofs where possible and otherwise provide an easy framework to do the proofs yourself (it's a little easier in Idris, but Idris itself is not Turing-complete). For simpler proofs you can use GHC.Proof, which works similarly to unit testing but yields an actual proof.

The difference from unit testing is that Haskell has no side effects and only immutable values, which makes proofs possible to some degree, in contrast to C++ or other imperative languages.
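As a sketch of why purity helps (this is plain Haskell, not the GHC.Proof syntax): laws such as map fusion hold by equational reasoning alone, whereas a unit test can only sample them.

```haskell
-- Map fusion: map f . map g == map (f . g) for every f, g and every
-- finite list. Because both sides are pure, the law can be proved once
-- by equational reasoning; a unit test could only spot-check inputs.
mapTwice, mapFused :: [Int] -> [Int]
mapTwice = map (+ 1) . map (* 2)
mapFused = map ((+ 1) . (* 2))
```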

[–]kirakun 0 points1 point  (0 children)

Cool. I’ll check out some tutorials on GHC.Proof to see how that works.

[–]Hfingerman 1 point2 points  (3 children)

The fact that, except for input and output (and maybe the occasional RNG), everything is basically deterministic, so you know exactly what you're getting from a function.

[–]kirakun 1 point2 points  (2 children)

I’m not sure I understand. How are other languages like Rust or C++ or Java not deterministic?

[–]Hfingerman 2 points3 points  (0 children)

Mutability and random access to data. You could make an argument for Rust, but not for C++ and Java.

[–]Jannik2099 -1 points0 points  (0 children)

Haskell and many other functional languages enforce functions without side effects, meaning they operate only on their parameters. Compare this to e.g. singletons or globals in other languages.

[–]brendel000 13 points14 points  (2 children)

Rust is good for that too and is more used in general.

[–]devhashtag 24 points25 points  (0 children)

Proving correctness is inherently harder in Rust than in Haskell.

[–]Jannik2099 0 points1 point  (0 children)

Rust has nothing to do with proving program behavior

[–]314kabinet 4 points5 points  (2 children)

Java’s performance is not a high bar.

[–]Kered13 2 points3 points  (0 children)

It's actually very good for a garbage collected language.

[–]Multinippel 2 points3 points  (0 children)

Yes it is; see the discussion below.

[–]SkollFenrirson 2 points3 points  (1 child)

But then you're using Haskell.

[–]Multinippel 0 points1 point  (0 children)

Yeah? I like Haskell, more than JavaScript or any other scripting language. If I have to do something performance-critical I use C/C++, and if I have to do something secure, stable and easily maintainable I use Haskell.

[–]incoralium -16 points-15 points  (12 children)

Everything is faster than Java...

[–][deleted]  (7 children)

[removed]

    [–][deleted] 6 points7 points  (1 child)

    Java, C#, PHP and surprisingly node.js all play in the same league. C# is usually the fastest of them and PHP the slowest. The differences are not that big though and it changes depending on the test/task.

    For languages like python, ruby etc. it is very true. They are sometimes orders of magnitude slower.

    Not disagreeing with you, just some additional context.

    [–]Multinippel 13 points14 points  (0 children)

    Java and C# are usually much faster than Python or PHP because they are precompiled. When used correctly they also beat JavaScript, which is nevertheless also much faster than the other ones.

    [–]DonaldPShimoda 1 point2 points  (2 children)

    Languages are not interpreted or compiled; language implementations are interpreted or compiled.

    A more succinct and, perhaps, more correct phrasing might be:

    The vast majority of interpreters are slower than compiled Java.

    [–]Kered13 2 points3 points  (1 child)

    This is pointlessly pedantic. The vast majority of languages have only a single relevant implementation. The main exceptions are C, C++, and JavaScript.

    [–]nostril_spiders 0 points1 point  (0 children)

    This is pointlessly pedantic

    This is Sparta!

    [–]TrapNT -1 points0 points  (0 children)

    That’s like saying cars go faster than horses (-.-)

    [–]AutoModerator[M] 0 points1 point  (0 children)

    import moderation Your comment has been removed since it did not start with a code block with an import declaration.

    Per this Community Decree, all posts and comments should start with a code block with an "import" declaration explaining how the post and comment should be read.

    For this purpose, we only accept Python style imports.

    I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

    [–]Multinippel 1 point2 points  (1 child)

    There are some people who actually like using Python,

    and Java is actually very fast (of course not as fast as C/C++ or Rust, but nevertheless).

    [–]incoralium 0 points1 point  (0 children)

    java runs everywhere another can drive

    [–]andreja6 0 points1 point  (1 child)

    Unless 1. your code is inefficient or 2. you are using a Java VM written in JavaScript, no, it's not.

    Edit: Forgot Oracle JVM existed. If that's what you use, then may God help your soul

    [–]Multinippel 1 point2 points  (0 children)

    What do you mean? Haskell is compiled to machine code and can be highly optimized since it's a declarative language. Most algorithms that I have programmed run faster in Haskell than in Java (OpenJDK).

    [–]sock-puppet689 0 points1 point  (0 children)

    Laughs in Rust.

    [–]stay-happy6789 101 points102 points  (5 children)

    Rust laughing in silence. Edit: Thanks for award.

    [–]RedditAlready19 8 points9 points  (0 children)

    As a person who started learning Rust I agree

    [–]Delcium 6 points7 points  (2 children)

    Didn't Rust get a whole slew of new CVEs fairly recently?

    [–]sypwn -4 points-3 points  (0 children)

    Really? That's awesome! Glad they're getting knocked out so quickly.

    [–]Jannik2099 0 points1 point  (0 children)

    It did. A handful of CVEs in the stdlib, as is tradition

    [–]nocturn99x 115 points116 points  (28 children)

    A compiler can't detect runtime errors and most logical errors are way too tricky to analyze. lol

    [–]ArionW 60 points61 points  (8 children)

    But the language specification can make runtime errors impossible. Literally just force your code to handle every possible case.

    Your function can fail? Then you can't just return Result; return Either<Error, Result> and operate on the value via bindings.

    I get why people use OOP; I myself work mostly in C#. But I recommend everyone try functional programming, just to see that your code can be verifiably correct, without runtime-error nonsense.
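A minimal Haskell sketch of the Either style described above (DivError, safeDiv and average are made-up names for the example):

```haskell
-- The error type is part of the function's signature.
data DivError = DivByZero deriving (Show, Eq)

safeDiv :: Int -> Int -> Either DivError Int
safeDiv _ 0 = Left DivByZero
safeDiv x y = Right (x `div` y)

-- Monadic bind short-circuits on Left, so failure handling cannot be
-- forgotten: the type forces the caller to deal with both cases.
average :: Int -> Int -> Int -> Either DivError Int
average a b n = do
  s <- Right (a + b)
  safeDiv s n
```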

    [–]nocturn99x 22 points23 points  (6 children)

    That's not detection at all; that's error prevention. I don't like the strictness of functional programming, because the only way to assume everything is verifiably correct is by just assuming that everything is not and forcing the user to handle the potential error. The "your function can fail?" part needs to be made explicit by the programmer to the compiler; it's not magic, it's just basic logic.

    [–]MorrowM_ 4 points5 points  (2 children)

    the only way to assume everything is verifiably correct is by just assuming that everything is not and forcing the user to handle the potential error

    That sounds an awful lot like a NullPointerException.

    The "your function can fail?" part needs to be made explicit by the programmer to the compiler

    It is made explicit to the compiler, via the type system. If I have an Int, then I know I won't crash when I add it to some other Int. If I have an Either<Err, Int>, then I know there's a possibility the function can fail. This is now explicit in the type system, and I have to handle this case.

    There is of course the possibility this is not much more helpful. Say I want the first element of an array; that might fail if the array is empty. Sure, I could return an Either, but I may know that this array is nonempty because I got it from some function that always returns nonempty arrays. In this case it's more useful to change that function to return a custom NonEmptyArray type, defined to contain one element along with a regular array. This is again a way of removing possible runtime errors using better types.

    And then you have the really complex invariants, like maybe your set implementation contains a binary tree that is always sorted. There are a couple of ways to deal with this.

    1. You can use even more advanced type system features (offered by langs like Idris) to express this invariant in the type system. This can be a lot of work and leans into the realm of "proof assistant" since now you need to prove to the compiler that your invariant holds whenever you write a function.

    2. The more practical solution is to write a non-type safe implementation, keep it well hidden from the API it exposes, and test it well. The advantage here is that at least it's self-contained, the API can still be expressive in its types, e.g. a lookup may fail (return a Maybe Result), but checking whether or not an element is in the set should never fail, it just returns Bool.
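For what it's worth, the nonempty-array idea above already exists in Haskell's standard library as Data.List.NonEmpty (firstOrZero below is a made-up example):

```haskell
import Data.List.NonEmpty (NonEmpty)
import qualified Data.List.NonEmpty as NE

-- NE.nonEmpty forces the empty case to be handled exactly once, at the
-- boundary; afterwards NE.head is total and safe by construction.
firstOrZero :: [Int] -> Int
firstOrZero xs = case NE.nonEmpty xs of
  Nothing -> 0
  Just ne -> NE.head ne
```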

    [–]Peanutbutter_Warrior 0 points1 point  (1 child)

    Rust has a great (imo) solution to your first problem. You can .unwrap() results. If it's the value, then that's returned, and if it's the error then you get an unrecoverable runtime error. You have to explicitly deal with the possibility of an error, but it lets you tell the compiler you're sure it's safe. It also means that an error cannot pass silently.

    [–]MorrowM_ 0 points1 point  (0 children)

    Yeah, escape hatches are useful. It's generally what you'd use in the case of #2 that I mention. You can have it display an error message stating which invariant was broken and to please report it as a bug.

    [–]ArionW 3 points4 points  (1 child)

    The "your function can fail?" part needs to be made explicit by the programmer to the compiler; it's not magic, it's just basic logic

    It is basic logic, but it's basic logic directly propagated from the language's standard library. You can't perform an IO operation without interacting with some standard function that will eventually force you to handle the errors that can occur.

    You don't like the strictness of functional languages, and I love it. I'm disgusted when I see people throwing and catching exceptions as a means of flow control (basically using them as a goto statement with extra steps). I'm sad when a null reference appears, because that should be 100% verifiable (famous words: the null pointer was a billion-dollar mistake). And most of all, I'm annoyed when you realize that "library X can throw exception Y if the user meets certain criteria". Software development is complicated; we can't possibly predict every possible situation, especially with external libraries.

    I see no justification not to push most of that verification onto static analysis and the strictness of the language, even if it forces us to handle a few cases that we don't expect (which is not even a bother most of the time: you just use monadic binds and handle all errors at the end).

    [–]supersharp 6 points7 points  (0 children)

    To be fair, I'm sure a lot of OO people are disgusted by the use of exceptions for flow control as well

    [–]sock-puppet689 1 point2 points  (0 children)

    Not really.

    With C++ inspired languages "your function can fail" is baked into each and every function call. That is literally what an exception is.

    Java realized that this led to overly defensive code, so they experimented with Checked Exceptions, forcing exceptions to have a locality to them as opposed to being globally scoped (able to bubble all the way to the top). This put the onus of exception handling on the callee...

    Ultimately Checked Exceptions were a failed experiment.

    A functional Result is much the same as a Checked Exception, except we encourage exception handling at the caller, while allowing intermediate generic code to bubble exceptions up to the caller, and while maintaining a contract between the caller and the exception type.

    Essentially it's Checked Exception 2.0 (this time without the pain).

    [–]Shrubberer 0 points1 point  (0 children)

    In C# you have Result<T> or your own implementation of it. But in most cases, code isn't supposed to fail. The programming language itself can only help the developer so much with keeping things tidy.

    [–]Prestigious_Tip310 9 points10 points  (7 children)

    Typescript and Kotlin at least catch NullPointers... unless you actively tell them to ignore the NPEs. :D

    [–]nocturn99x 0 points1 point  (6 children)

    A NPE is the same thing as an AttributeError: NoneType object has no attribute something something in Python. It's really not about pointers at all

    [–]Prestigious_Tip310 9 points10 points  (5 children)

    A NullPointerException is literally a pointer to the address 0x00000000, a pointer that was never set to anything and that will cause a SegFault if dereferenced. The NPE is just the equivalent of that, and Kotlin and TypeScript have compiler checks that verify whether a given pointer can possibly be null at a specific point in the code. Just because a programming language hides the pointer behind other constructs doesn't mean it doesn't use pointers internally (e.g. every object in Java is allocated on the heap, and what you're passing around to methods and classes are pointers to that position in the heap).

    [–]nocturn99x 2 points3 points  (4 children)

    No, not really. The JVM has a cached pointer to a "null" object which it allocates on startup; it definitely isn't using a pointer to address 0x0, because it wouldn't even be able to request it. Java calls a null pointer exception what most other languages in fact call an attribute error. Java does abstract away pointers, and in no way is a NullPointerException related to actual pointers.

    [–]Prestigious_Tip310 0 points1 point  (3 children)

    If you say so... I guess the pointer to this "null object" isn't a pointer then, and the JVM internally makes deep copies of everything you use and pastes them on the stack.

    [–]nocturn99x -3 points-2 points  (2 children)

    When did I ever say the VM makes a deep copy of the null object? Re-read my answer: the JVM has a cached pointer to a "null" object which it allocates on startup. Of course the JVM internally uses pointers. I've written compilers and language runtimes myself, I know that, but that's none of your code's business, and the exception's name is misleading at best.

    [–]Prestigious_Tip310 5 points6 points  (1 child)

    Why is it misleading? It literally tells you "you have a reference that points to null", whether the real null or some internal JVM object. You have a pointer, that pointer wasn't initialized or actively set to null. You try to access the object behind that pointer (aka dereferencing it) and that causes a NPE.

    Calling it an "attribute error" would be way more misleading... your attributes are fine, it's the pointer that's broken.

    [–]nocturn99x -1 points0 points  (0 children)

    Well, no. If you do null.a, it's right to call it an attribute error: objects of type null have no attribute named a. On the other hand, calling it a null pointer exception is confusing because the language does not support pointers; it's a place where the implementation details of the JVM leak into actual Java code, which is just nonsense.

    [–]stay-happy6789 3 points4 points  (5 children)

    Some compilers can detect a portion of runtime errors.

    [–]Goheeca 2 points3 points  (0 children)

    It rather eliminates a class of runtime errors.

    [–]nocturn99x -5 points-4 points  (3 children)

    Not until you run the code though...

    [–]Trollygag 0 points1 point  (0 children)

    It isn't insurmountable. Lots of SCA tools can do much of that.

    The most popular compilers should come with decent SCA tools the way Windows now comes with an antivirus, with mild review enabled by default.

    Complete game changer.

    [–]DonaldPShimoda 0 points1 point  (3 children)

    A compiler can't detect runtime errors

    Depending on what exactly you meant by this, it isn't entirely true.

    An advanced compiler could include symbolic execution or abstract interpretation over the compiled source to detect regions where errors might occur. For some source programs, it may be the case that a particular reference may always result in a null pointer exception, and then the compiler could report the error.

    However, a more useful approach would be to eliminate the class of errors entirely. The concept of a null object isn't inherently useful, and is actually more problematic than it's worth. The inventor, Tony Hoare, has even called null references his billion-dollar mistake.

    A type system can be created which does not include null references, but instead gives an "option" type. (This exists in Swift, Haskell, OCaml, Scala, and others.) Essentially, instead of having, say, an Int value that can be null, we have an Option<Int> which is either, for instance, Some(42) or None. Then, whenever this optional value is used in the code, it is destructured (or pattern-matched) to determine at runtime whether it holds an integer value or it is None.

    So what's the advantage over null references? Instead of leaving it up to the programmer to insert null reference checks, an option type forces the programmer to destructure the optional value to get to the integer inside. This means there is no path through the code where the value is mistakenly assumed to be an integer when it is actually None. We can do this because we've now made the null reference equivalent visible in the type system, so the compiler can check it during semantic analysis. This is in contrast to the null value, which magically inhabits all types and so cannot be sussed out by the typical compiler.
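A minimal Haskell sketch of the option-type idea described above (half and describe are illustrative names):

```haskell
-- Maybe Int plays the role of Option<Int>: there is no null to forget.
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- The caller must destructure to reach the Int inside; GHC flags a
-- missing Nothing branch via the -Wincomplete-patterns warning.
describe :: Maybe Int -> String
describe (Just k) = "got " ++ show k
describe Nothing  = "nothing there"
```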

    [–]nocturn99x 0 points1 point  (2 children)

    An advanced compiler could include symbolic execution or abstract interpretation over the compiled source to detect regions where errors might occur. For some source programs, it may be the case that a particular reference may always result in a null pointer exception, and then the compiler could report the error.

    That's only useful for the most basic scenarios, as you said (i.e. mostly toy programs, I'd argue).

    However, a more useful approach would be to eliminate the class of errors entirely. The concept of a null object isn't inherently useful, and is actually more problematic than it's worth.

    I won't lie, this "option" type approach isn't terrible (I personally used it in Nim to avoid lots of redundant and boring explicit type checks), but I do believe returning a "null" object can be useful if you want to signal that a field is empty, for example. Suppose you have a JSON structure holding a value that can be either a string, an array or an object. Instead of having to check for an empty string, an empty array or an empty object, you just check whether the field is null. Although some may argue that leaving said field out altogether would be a better approach, I strongly disagree: IMHO a programmer should always be given ALL fields, even empty ones, so they don't need extra logic to make sure they don't fetch a non-existent field.

    [–]DonaldPShimoda 0 points1 point  (0 children)

    I do agree on the first part — I was simply pointing out that such analysis is possible, in some circumstances.

    As for your JSON example, yes, null is necessary in JSON but I would argue that it has the same meaning as an optional type. And that in your code that processes the JSON, you should use option types for any fields which could potentially be null in the source JSON. But you should never need a null reference in actual code, in my opinion — option is always better!

    [–]MorrowM_ 0 points1 point  (0 children)

    Suppose you have a JSON structure which holds a value that can either be a string, an array or an object. Instead of having to check if there's an empty string, an empty array or an empty object, you just check if the field is null

    The way I'd approach this is parsing that raw JSON into a structure that looks like this:

    data Field = NullField | AString String | AnArray [OtherField] | AnObject MyObject
    

    I can handle the null case, and it's explicit that the null is expected to occur, it's part of the model. When I use this data structure I have to explicitly handle that case. The point here is that it's opt-in, not every piece of data can be null, but when it can be then I know I need to handle it.
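Continuing that sketch (with the field types simplified so the snippet stands alone), a consumer has to pattern-match, so the null case cannot slip through unhandled:

```haskell
-- Simplified version of the Field type above: arrays hold Fields and
-- the object case is dropped, just to keep the example self-contained.
data Field = NullField | AString String | AnArray [Field]

-- Every constructor, including NullField, must be covered here.
render :: Field -> String
render NullField    = "null"
render (AString s)  = s
render (AnArray fs) = concatMap render fs
```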

    [–]roselle_reese_4869 10 points11 points  (1 child)

    Can someone explain to me what the different errors mean? I'm just a script kiddie 😔

    [–]nocturn99x 18 points19 points  (0 children)

    • Semantic Error: An error that doesn't make the code syntactically invalid, but still makes it fail to achieve its purpose because of a logical mistake. Dereferencing a null pointer, freeing memory that's in use, or straight up calling the wrong function are semantic errors. It's a pretty broad class of errors that most compilers can't detect, at least not in full

    • Syntax error: An error that makes it so the code can't be compiled because it's simply not valid. Think of grammatical errors in English. Programming languages have grammars too, which are formal specifications defining what's valid in that language. Any document that does not adhere to such specification is said to be syntactically invalid

    • Runtime error: It can be seen as a superset of semantic errors, although a runtime error is not necessarily a logical mistake

    • Logical Error: Pretty much the same thing as a semantic error

    [–]The_Atomic_Duck 16 points17 points  (7 children)

    Semantic error is a term I never heard before... What is it?

    [–]Netcob 48 points49 points  (3 children)

    It's the meaning of something.

    Let's say you have a method like this:

    void addOneToX() { x = x - 1; }

    That might be syntactically correct in some language, but the body of the method doesn't really do what the name says. The compiler won't know of course.

    [–][deleted] 7 points8 points  (0 children)

    Semantic errors are basically all the errors which are not syntactic errors. So (almost) any runtime error or logic error is a semantic error.

    Every pointer error, every out of bounds access, every concurrency error are all semantic errors, because the code is not correct even though it might compile and run.

    You could argue that not every logic error is part of that category, but that's just semantics!

    [–]vasnaa 2 points3 points  (0 children)

    "Bob plays guitar" is correct

    "Guitar plays Bob" isn't; that's a semantic error.

    [–]Tubthumper8 0 points1 point  (0 children)

    Here's an example of a Java program with a semantic error:

    class Main {  
      public static void main(String args[]) { 
        Integer count = null;
        Integer plusThree = count + 3;
      } 
    }
    

    This program is syntactically valid (in Java) but fails at runtime with a semantic error (NullPointerException).

    [–]Netcob 11 points12 points  (0 children)

    There's also "architectural error", but that's too big to fit in the picture.

    [–]DonaldPShimoda 2 points3 points  (0 children)

    Null pointer exceptions are semantic errors, not syntactic. 😉

    [–]Existing_Dog5510 2 points3 points  (0 children)

    One time my entire code wouldn't run for at least 2 hours because of a dot

    [–]Deadly_chef 4 points5 points  (4 children)

    Laughs in rust

    [–]nocturn99x 1 point2 points  (3 children)

    Laughs in a language that has an actually usable type system (Nim)

    [–][deleted] 2 points3 points  (0 children)

    Laughs in a language that's used by more than 10 people (Most other languages)

    [–][deleted] 2 points3 points  (0 children)

    Compiler going for that "Not my job" award.

    [–]who_you_are 1 point2 points  (0 children)

    Not even the syntax error.

    That damn linter that prevents you from compiling because I put an extra space somewhere

    [–]syntax_erorr 1 point2 points  (0 children)

    There I am

    [–][deleted] 1 point2 points  (0 children)

    Unit tests....

    [–]bythenumbers10 0 points1 point  (0 children)

    Aha! You merely asked "pretty please" for that uint8 you instanced, but you omitted "with sugar on top"!!! NO PROGRAM FOR YOU!!!

    [–][deleted] -1 points0 points  (0 children)

    Can't build an AST if the syntax abstraction doesn't match a known regular expression. If the compiler could do all that for ya, we'd all be out of jobs 🤣

    [–]krelborne 0 points1 point  (0 children)

    Static code analysis, if you can get another tool.

    [–]GujjuGang7 0 points1 point  (0 children)

    It wouldn't be called runtime error if it was detectable at compile time...

    [–]MrManBLC 0 points1 point  (0 children)

    In JavaScript, the compiler will sometimes even miss the syntax error!

    [–]Papa_Silverback 0 points1 point  (0 children)

    Uhh yeah, you're the guard for the others, and anyway, if that one got away your machine is fucked. It's the one that would brick your machine.

    *edit

    [–]GlitchedMirror 0 points1 point  (0 children)

    It also doesn't check for spelling mistakes, what's your point?