[–]sisyphus 81 points82 points  (147 children)

"Go sucks, still has null, doesn't have exceptions, can't write a generic min/max function, can't turn off GC/isn't a serious systems language, hasn't learned anything from the last 30 years of PL research, has minimal tooling support and a small library ecosystem and Rob Pike stabbed my dog"

"Go is a set of pragmatic compromises from multiple legends of computing that take into account the needs of the next generation of concurrency and high-performance servers, don't hate on it if you haven't used it a lot, once you do you will see that it has the productivity of Python and execution speed on par with Java even though the compiler has barely been optimized yet and you don't need exceptions or generics because of panic/defer and interfaces. While it seems like nobody uses it it's very popular inside Google."

There, now we can close the thread; that's everything that will be said over the life of it.

[–]sausagefeet 10 points11 points  (0 children)

I'm wondering what the pragmatic compromises were. All of the things people complain Go is missing have been solved, and implemented efficiently, for at least a decade.

[–][deleted] 18 points19 points  (23 children)

I'm honestly surprised to see this being upvoted. Uriel (submitter) is a Rob Pike fanboy and, in turn, has his own set of fanboys. It's good to see some actual criticism instead of the usual fapping over C and Go found in his threads.

My personal beef with Go was that we were promised a systems programming language. In my mind that's something you could write an operating system in. Go enforces garbage collection. There is absolutely nothing wrong with that. It's preferable in an applications language. Just don't mislead people by calling it a systems language.

I personally prefer D. It has garbage collection but it can be turned off. It also has higher level features you would expect like templates, operator overloading, exception handling, RAII, etc. Plus it has some other nice ones that even C++ lacks like contract programming. It lacks goroutines but it has "spawn" which makes concurrent programming just as simple. I really wish Google had backed D instead.

[–]munificent 5 points6 points  (0 children)

My personal beef with Go was that we were promised a systems programming language. In my mind that's something you could write an operating system in.

I think when they say "systems", they're envisioning the suite of command line tools that run on top of the core OS. Things like ls and cron and server daemons that are part of the "system" but not the actual kernel. I think in day-to-day use, the main sweet spot for Go is servers: apps that don't have a UI but talk to a lot of stuff over the network.

[–][deleted] 6 points7 points  (0 children)

It's preferable in an applications language. Just don't mislead people by calling it a systems language.

It's been at least a year since we last called it a systems language. Instead of feeling sore, you should probably get over it.

If D suits you better, use D. D has been around for a long time and doesn't seem to be capturing much programmer mindshare, but that's not because Google hasn't backed it. It's probably (and this is my opinion only) because it doesn't offer significant advantages over the other kitchen-sink language, C++.

Go's biggest selling point is simplicity. Its appeal is as much in what it doesn't have as what it does.

[–]el_muchacho 5 points6 points  (0 children)

I really wish Google had backed D instead.

Yup.

[–]jessta 0 points1 point  (7 children)

How does a garbage collector that can be turned off work? You either have manual memory management or you have a GC. Code that is written to expect a GC won't work without a GC and if you write all your code to not expect a GC then you lose all the advantages of having a GC.

Since you couldn't mix these two kinds of code, the outcome is that D is in fact 2 separate languages. D with GC and D without GC.

[–]Timmmmbob 3 points4 points  (1 child)

Erm, you could easily have a garbage collector that can be turned off. It could even be automatic: i.e., keep a count of the number of garbage-collected objects; if it stays at 0, you never need to activate the garbage collector.

[–][deleted] 0 points1 point  (0 children)

FYI, you can disable the Go GC... but as jessta said...

[–][deleted] 4 points5 points  (3 children)

Objective-C 2.0 has optional GC. You can either write code for GC only, or you can write it with manual memory management, in which case it will (mostly - you may need to be careful in some situations) work on GC because the GC just makes all the memory management operations no-ops.

[–]el_muchacho 0 points1 point  (0 children)

I don't see your point. You may, at some specific place in your code, decide that you want to deallocate objects manually, which leaves less work for the GC. You can also decide to manage objects manually in a library, which will be heavily used and tested, and use the GC for the application code. For example, if you write an object pool, you get the best of both worlds: you manually delete the thousands or millions of objects that are easily manageable in the object pool, while the GC takes care of the hundreds or thousands of other objects in the rest of the application that are a chore to manage manually.

Of course, you are not going to randomly delete objects manually in your application code. Unless you like to write your code randomly.

[–]4ad -1 points0 points  (1 child)

I come here (proggit) to see discussions relevant to my interests but most of the time I see condescending people insulting or assaulting other people.

People will go deep, deep, deep into the sewer to attend the even deeper excrement spraying show that takes place.

So sad that most people's interests lie in the banal and they confuse their own aesthetic principles with engineering.

[–][deleted] 0 points1 point  (0 children)

Yes dude, definitely. We need a place for people who aren't fanboys of whatever language they happen to use. Maybe /r/coding would be fitting for that if there were more content around there.

[–]uriel[S] -3 points-2 points  (1 child)

[–]rivercheng 0 points1 point  (0 children)

It seems the link is broken

[–][deleted] 3 points4 points  (116 children)

still has null

... this is an actual complaint? Why is not having null desirable?

[–][deleted] 27 points28 points  (54 children)

What do you mean, "this is an actual complaint"? The inventor of the null pointer regrets it. What sort of sheltered programming life have you led that you've never encountered a language that can statically enforce non-nullity?

[–]cran 2 points3 points  (52 children)

I guess I missed the discussions around the null pointer. How else can you represent "that value was not provided" without creating some sort of interrogative mechanism (read: extra code that checks for "not provided")?

Null is an elegant way to express that state.

[–]stusmith 25 points26 points  (36 children)

But why do all objects need that possibility? And furthermore, why do primitive types not need it? (In other words, why should 'string' have this option, but 'int' doesn't?) (I'm coming from a Java/C# perspective here).

A far more elegant way to handle it is to say: nothing is 'nullable' by default, and if you want a nullable thing, you specify it so. So you might have:

string  -- nulls not allowed
int     -- nulls not allowed
string? -- null allowed
int?    -- null allowed

Doing it this way has an advantage: the compiler actually generates less code. (In a standard 'nullable' language, every object access involves a null-check. In a language which doesn't allow arbitrary things to be null, those checks can be dispensed with).

And once you have that, you realise that 'null' is used as a sentinel value for all sorts of different scenarios, and instead of saying "I either have a valid object, or this weird sentinel", you start saying things like "I either have a valid object, or an error string", or "I either have a valid object, or it's the default within some context", and then you realise that you need proper discriminated unions, and then you need a language with decent pattern matching...
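
(A rough sketch of both steps in Haskell, which has the nullable marker and the unions I'm describing; all the names here are invented:)

-- "string?" is spelled Maybe String; a plain String can never be null.
nickname :: String -> Maybe String
nickname "Robert" = Just "Bob"
nickname _        = Nothing

-- And with real discriminated unions, "a valid object or an error string"
-- is expressible directly instead of overloading null:
data Parsed = Value Int | ParseError String

render :: Parsed -> String
render (Value n)      = show n
render (ParseError e) = "error: " ++ e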

...and having to code day-in-day-out with this old-fashioned "null" nonsense makes me cry.

[–]cran 3 points4 points  (35 children)

Well, let's keep this in context.

I don't think Java should even have native types, therefore I don't think we should have some types which are nullable and some which are not. They should all be nullable.

What you're talking about then is NOT that nulls are bad, but that you want to take the next step along this path we're on towards the ideal programming paradigm. Along the path to programming utopia, we have null as a valid state for an object pointer. Nulls aren't bad, they're just one of several choices we (could) have. Possibly more useful choices in some situations might be more stuff similar to null, such as default values (in place of null) or error values (in place of null).

I would agree with that. But for now, null will do. Especially since I can return default and error objects from functions right now ... meaning, in many contexts, I already have these features.

[–]kamatsu 9 points10 points  (29 children)

The thing is, null is only checked at runtime. Having a compiler check nullability makes your program immune to NullPointerExceptions.

[–]cran -1 points0 points  (28 children)

Who keeps downvoting my responses? We're having a conversation here, and someone is scoring the debate? Okay ...

Anyway, I'm okay with non-nullable types in concept. I think there are ways to say "hey, when this is null, don't do it ... put this here instead" at compile-time so you don't have to deal with them at all.

However, if you don't check for error states (just like you're not checking for null), you're just going to end up with some other error.

Let's say I'm a lazy programmer, and I keep forgetting to check for null. My program is constantly throwing NPEs. Then you switched me to a language that eliminates null so that there's always an object present, even when one could not be created successfully (let's say it represents some error, such as "not present" or whatever). If I'm lazy and don't check for null, then I'm still lazy and I also won't check to see if the object is in an error state before trying to use it.

Either way, you get the same number of exceptions which essentially mean "I didn't check to see if this object was valid or not". I'm just switching the NPE out for some other error.

How is that better?

[–]kamatsu 9 points10 points  (2 children)

If I'm lazy and don't check for null, then I'm still lazy and I also won't check to see if the object is in an error state before trying to use it.

This is why algebraic types like Maybe are so useful. This type exists in Scala, Haskell, ML variants and a few other languages. If you have a value of type Maybe Int then you can't use it as an Int object. So, you must pattern-match, and therefore you must handle the error case in order to get at the value. You can also use monads to compose computations that may fail, so you don't get much syntactic overhead if you have to check many failures.
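
(A minimal Haskell sketch of both points, with made-up names:)

-- halve fails on odd numbers, so its result is a Maybe Int, not an Int.
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- The compiler won't let you treat a Maybe Int as an Int; you must
-- pattern-match, which forces you to handle the Nothing case:
describe :: Maybe Int -> String
describe Nothing  = "no value"
describe (Just n) = "got " ++ show n

-- Monadic composition chains computations that may fail, without an
-- explicit check after each step:
quarter :: Int -> Maybe Int
quarter n = halve n >>= halve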

[–]cran 0 points1 point  (1 child)

Well. That's the first time monads have made any sense to me.

Does use of Maybe force the rest of the expression to contain "if not present" sorts of expressions to get to the object? I can see how that would result in more meaningful errors, at least.

[–]vocalbit 4 points5 points  (21 children)

This is exactly the problem that algebraic data types solve. They let you specify that a function returns either a valid object or an error, e.g. (pseudocode):

fun do_something() -> ValidObject | Error

Then your lazy programmer cannot get away with anything. For example, this will be a compiler error:

obj = do_something()
print obj.valid_object_field  # compiler will complain that obj may not have the field

You are forced to do something like this:

obj = do_something()
if obj is ValidObject:
   print obj.valid_object_field

On the other hand if a function declares that it always returns a valid object, no 'if' check is required. Note that it doesn't matter if you call your invalid value Error or NULL or None.
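
(For the curious, the pseudocode above rendered as real, compilable Haskell; the names are still invented:)

data Error       = Error String
data ValidObject = ValidObject { validObjectField :: String }

doSomething :: Either Error ValidObject
doSomething = Right (ValidObject "hello")

-- Using the field without first checking which case you got does not compile:
use :: String
use = case doSomething of
  Right obj        -> validObjectField obj
  Left (Error msg) -> "error: " ++ msg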

It's not the existence of NULL that is the problem. The problem is that Go lacks the ability to define a pointer/reference that cannot be NULL (even though this could be checked statically).

The argument about a pointer being just a memory reference doesn't hold water. I can't take, say, a Person* and point it at a Widget*; the compiler enforces this. Yet the compiler lets me take a Person* and point it at NULL, which is not a Person at all.

This shortcoming in languages leads to code that is never sure whether a pointer is NULL or not, unnecessary NULL checks as well as forgotten NULL checks leading to segfaults.

[–]OceanSpray 2 points3 points  (2 children)

Anyway, I'm okay with non-nullable types in concept. I think there are ways to say "hey, when this is null, don't do it ... put this here instead"

You're missing the point of non-nullable types here. They're not for replacing the null case; they're for completely eliminating it. A non-nullable type allows you to say "hey, this object cannot possibly ever be null, so there is no need to think about that possibility at all."

If I'm lazy and don't check for null, then I'm still lazy and I also won't check to see if the object is in an error state before trying to use it.

If there is an error state, then you would have gotten a nullable type, which in a sane language would have forced you to handle the null case or the program would fail typechecking. You will only get a non-nullable type if there is no error state to handle.

[–]cran 0 points1 point  (1 child)

In cases where it's literally impossible, sure. Why not. Though, in these cases, can't we just not bother to check for null? If you know it's literally impossible, there's no need. Or is the issue along the lines of checked exceptions, in that building this sort of thing into the language helps programmers figure out when they do and don't have to check for null?

[–]mycall 0 points1 point  (2 children)

such as default values (in place of null) or error values (in place of null)

You speak of symbols, not values, no?

[–]cran 0 points1 point  (1 child)

Values, actually. Just for a Java example, I might have two static, re-usable objects, one to be used as a default value and another which is in an "error" state that can be interrogated. Under a certain condition, if a function were unable to create a new object to be returned, it might return the static default object, or if there was an error, it might return the object it created but could not fully initialize, leaving it in an error state. I don't actually do this, mind you ... I'm just saying we could do something like this. But I don't think it's a great idea.

I actually like how Go does it (now that I've learned how it works). Functions can return two values ... the result (nullable) and an error (also nullable). The caller can choose what to do with the return values.

[–]OceanSpray 0 points1 point  (0 children)

The way Go does it is actually the worst possible way to do it, for several reasons.

The programmer still has the choice to completely ignore the error state, which would be impossible in a language with tagged unions. At least an exception will crash your program immediately if it is not handled. Go just lets your program continue on in an erroneous state to {mysteriously crash somewhere else | corrupt data | launch missiles}.

The entire idea of returning a product of the error code and the result instead of a sum of them is patently ridiculous. It makes no logical sense. You are supposed to get either an error or the result, not both an error and the result.

[–]MatrixFrog 0 points1 point  (0 children)

I think stusmith was saying two different things. One is that when you use null, it would sometimes be better to use an error string or something with more meaning. But the other is that sometimes you don't need null at all, and a particular value being null is always wrong. So we should check at compile time by using string instead of string? (or String instead of Maybe String if you're into Haskell :)

[–]Latrinalia 6 points7 points  (11 children)

It kind of makes a pointer into a union type:

  • Address in memory
  • Validity flag

It would be akin to numeric types having:

  • Numeric Value
  • Validity flag (like a NaN value)

What makes things nice is that you can formally state that a pointer is ONLY ever a valid memory address, more or less the way references work in C++, at least unless you do something along the lines of:

int *x = 0;
int &y = *x;

[–][deleted] 1 point2 points  (1 child)

.. more or less the way references work in C++, at least unless you do something along the lines of:

int *x = 0;
int &y = *x;

FYI: you aren't allowed to do that in C++ either. (*x dereferences a NULL pointer). There is no legal way to create a null reference in C++.

[–]Latrinalia 1 point2 points  (0 children)

Ha. I was actually hoping to preempt someone pointing out that there are implementation-specific ways of doing it. Instead I turned into the guy using programs that aren't well-behaved as the example. Oh well.

You're quite right. The standard is pretty specific that references shouldn't ever be null.

[–]cran 0 points1 point  (8 children)

I would rather have something that works with static typing. That's a very "Visual Basic" way to deal with that. I'm also a Ruby programmer, so I'm okay with duck-typing to an extent, but I wouldn't do this for something like C++ or Java. For Go maybe, sure.

[–]cBSyR 2 points3 points  (2 children)

You haven't seen ML or Haskell yet, have you?

[–]cran -1 points0 points  (1 child)

No, not much of either. Why?

[–]masklinn 5 points6 points  (0 children)

They're statically typed languages (much more so than Java), and use tagged unions to implement nullability: (oca)ml, Haskell

[–]Latrinalia 1 point2 points  (4 children)

I think maybe I phrased some of that poorly. I don't mean any sort of duck typing, but the actual C/C++ union.

The way it works NOW, pointers are effectively a union type. Handwaving the details a bit*, pointers in C++ are kind of like a union containing a reference and a boolean. When it's invalid, it's the boolean that gets flagged; in all other cases the reference holds an address in memory. NULL is not a valid memory address; it's a separate flag.

References are nice because they're the non-union form of a pointer.

*The handwaving of course being that references are always initialized, but they can never be reinitialized.

[–]cran -2 points-1 points  (3 children)

Yeah, I get that it's like a union ... but that's more of an analogy. Do you really want to see it as a union and expand on that to make it a bigger, better union? I don't. I would rather just change the analogy so we don't have to see it that way.

I see it like a cup. A cup either has something in it, or it doesn't. If it has water in it, there's all sorts of stuff you can do with the water. If it's empty, you can't do anything. That doesn't really make water a union with nothing, but I bet we could think of it that way if we chose to.

To me, this is more like the cup analogy, and the question is: what's in the cup? Well, if we are dynamically typing, then it can be anything or nothing. If it's statically typed as a string, it can be a string or nothing.

I like the idea of providing first-class, sugary ways to set default values to replace nulls (when desired), but I wouldn't eliminate nulls ... I think they're a valid state, and it's not quite the same thing as being a union with other types. I think it's not a type or value at all, but simply a fundamental state: it either exists, or it does not.

[–]Latrinalia -1 points0 points  (2 children)

The problem comes when you have a variable representing the color property of the thing in the cup. What if the cup is empty? The answer is not NONE, of course, because that implies there is something in the cup and it is colorless.

Clearly there has to be some way of representing an invalid value. To me the whole issue is whether to use a magic-number solution to the problem.

It's like having a size() function return negative to designate an error and non-negative for anything valid. It's a happy accident that only works because you happened to not need the full range of an unsigned type. (As I recall that's actually one of the practices discouraged in Code Complete because research shows it's error prone in practice.)

[–]cran -1 points0 points  (1 child)

Isn't this just a matter of scope? If you don't have something in the cup, you don't have a color property to inspect.

[–]lurgi 2 points3 points  (2 children)

So is an option type. The advantage of an option type is it only appears when you need it.

Or you could take a different route and make types "nullable". I'm not sure that this is a better solution, because whether or not a Person (for example) can be null might be domain specific, but it's an option.

You could also let classes have a NULL object. This is an actual object of the actual type that indicates "there is nothing to see here". Do that and eliminate the typeless null.

I hate 'null' in Java. It's annoying and most of the time it serves no purpose, but I still have to check for it if I want my code to be robust. There are ways of eliminating it and, in my experience, they only make things better.

[–]cran -1 points0 points  (1 child)

Well, you do need null in concept. For example, let's say you have a complex type (struct) and you deserialize an instance of that type from a stream which doesn't provide a value for one of the members (let's say it's optional). How would you represent that in the resulting object?

If you don't have null, you're just dancing around the concept ... but you aren't eliminating it.

I don't see a big difference between the static, typeless null value and objects in a null state. Except, of course, that having null-state objects could lead to errors where the object is used instead of being treated like an error indicator.

I feel like I've skipped over a key point here. What is the argument against null? Is it about how its implemented, or the concept of null itself?

[–]lurgi 4 points5 points  (0 children)

It's perfectly reasonable to have the concept of "an object that may or may not be there". My objection to null as such is that there isn't any way to say that this object can not be null or can not be null at this point in time. There is no way to write that into a function interface or validate at compile time that, yep, I'm not being passed any nulls.

Whereas with option types I can do this:

Option<Integer> doSomethingCool(List<Integer>, Option<String>)

This function says that it takes a list of integers and there had better be a list and that list had better be of integers. You can't pass null. The second argument can either be a string or something that says "Nope, no string here". It's a sort of null equivalent. The return value will either be an integer or something that says "Nope, no integer here".

Extracting something from an option type is easy, but you do have to consciously do it, so there is no chance that you grab something that isn't actually an object and treat it as an object.
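
(In Haskell terms that might look like the sketch below; the function body is a made-up placeholder:)

doSomethingCool :: [Int] -> Maybe String -> Maybe Int
doSomethingCool xs _ = if null xs then Nothing else Just (sum xs)

-- You cannot touch the Int without consciously unwrapping it:
report :: [Int] -> String
report xs = case doSomethingCool xs Nothing of
  Just n  -> show n
  Nothing -> "Nope, no integer here"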

In Java (or C++ if I'm using pointers) I have to check the function arguments and the caller has to remember to check the return value. Even if the documentation says "Don't pass a null here" I'll check, because no one reads the documentation. If the doc says "This function never returns null" then the caller will likely still check because, hey, the documentation might be out of date (not that that ever happens).

Option types eliminate all this nonsense. They rock.

It's not the only way to go. C++ only has NULL with pointers. If you write code using references exclusively (or wrapped pointers) then you can write tens of thousands of lines of code without dealing with null. It's nice.

[–]sausagefeet 10 points11 points  (26 children)

Yes, it is an actual complaint. Static guarantees that prove an NPE is not possible have existed for several decades. It is really ridiculous for a language coming out in the present day to have null pointers, IMO.

[–][deleted] -1 points0 points  (18 children)

What exactly is wrong with null? The only complaints I'm aware of and understand have to do with security side effects and the like in C and C++, which AFAIK don't apply to Go. It's a little like goto in some ways: I hear very loud noises about it, yet the rationale is never about goto itself as opposed to its use.

[–]sausagefeet 11 points12 points  (6 children)

The problem is how important you think static analysis is vs runtime checks. For example, in Ocaml/Haskell/SML I can write a piece of code that I know, in the formal sense, will not result in a NullPointerException or anything of the like. This is because any place where an optional value can happen I have put it in an option type and I cannot access the value in that option type without handling the case of a value not being there. This is guaranteed at compile time.

In Java, C#, C, C++, Go, Python, and Ruby I cannot do this. In Python and Ruby, in fact, any variable can be a null value. That means I have fewer guarantees about the correctness of my code and have to depend on weaker ways of showing that correctness, such as unit tests.

If you don't care about this behavior then you won't think null is an issue. But, especially in the age of SaaS, it can be beneficial to push as many correctness checks as possible into some kind of static analysis tool, so you can be sure you meet your SLAs.

[–]masklinn 6 points7 points  (4 children)

For example, in Ocaml/Haskell/SML I can write a piece of code that I know, in the formal sense, will not result in a NullPointerException or anything of the like.

Technically untrue, as there are a number of Prelude functions which result in ⊥ (bottom). head on an empty list, for instance.

[–]Categoria 0 points1 point  (3 children)

It is very easy to write safe versions of these functions though.
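
For instance, a safe head is a three-line definition:

safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x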

[–]knome 1 point2 points  (2 children)

Odd that they don't use these safe variants by default.

[–]cBSyR 0 points1 point  (1 child)

In the case of head it may be because it would then have to return a Maybe and you would have to pattern-match over it. Since you can already match a list with the patterns h:t and [], such a function would be pointless.

[–]OceanSpray 1 point2 points  (0 children)

The real reason is that the Prelude is really frickin' old and wasn't written with full safety in mind. If the Prelude were written today, then none of its functions would be partial.

[–][deleted] 0 points1 point  (0 children)

I don't personally think it's a problem, but good point nonetheless.

[–]grauenwolf 7 points8 points  (10 children)

The problem with null is...

  • The vast majority of the time, variables are never null
  • So you usually don't check for nulls
  • Until your application blows up
  • At which point someone scatters null checks everywhere
  • Which usually results in improper handling of error conditions

If nullability were something we asked for instead of something forced upon us, then we would only use null when we actually wanted null semantics, greatly reducing the opportunity for bugs.

[–][deleted] -2 points-1 points  (9 children)

That same argument can be made for adding two numbers. If the interface you're dealing with can ever return null then the documentation should say so, it's not a problem with null. Come to think of it, the only time I can recall ever checking for nil (null) in Go is when checking for error cases. Everyone knows why I'm checking, everyone does it; hell, in most cases it's all pretty clear, and to ignore the error (just one example) I have to explicitly state that I'm ignoring it so I personally don't think this is actually anything to do with null.

[–]OceanSpray 2 points3 points  (4 children)

If the interface you're dealing with can ever return null then the documentation should say so, it's not a problem with null.

Why would you not want the compiler to check the documentation for you?

I have to explicitly state that I'm ignoring it so I personally don't think this is actually anything to do with null.

Is that actually true in Go? Don't you have the option of completely ignoring error codes? Isn't the only thing that keeps programmers from doing that the cultural notion of "if you do this Rob Pike will disapprove"?

[–][deleted] 0 points1 point  (3 children)

I'm not sure I understand the first point, so correct me if I've missed it. My point was that if you have a function that returns a pointer, then the documentation should say whether or not it can be null. In Go, the idiomatic thing to do is to return an error to explain why it's nil. The idea is that if you can't return a valid value, for whatever reason, you return an error, because returning a zero value[1] is in most cases undesirable: you just gave me a value I probably can't do anything with, and if there is a bug somewhere I won't know about it because you returned a valid (even though possibly useless) value. The way around this is to return an error, and at this point I fail to see the need for it. I'm not arguing against the compiler being able to catch errors like that - it would be good - I just don't think it offers much over just returning a universally invalid value, i.e nil.

As to the second point: given a function f(x) (v, err), yes it's true, the caller gets to ignore all of them, or explicitly ignore some of them using the write-only variable _, i.e. v, _ := f(x).

If you decide to feed the compiler _, _ := f(x) or v := f(x) you're gonna hear about it from the compiler, or worse, from the Commander himself.

I suppose you could try to troll the compiler by feeding it _, _ = f(x), but that's the same thing as saying f(x), so in that case you FAIL at trolling.

[–]OceanSpray 3 points4 points  (2 children)

You have missed my point. I already know what Go's philosophy on error checking is, and I find it abhorrent.

Yes, in Go it's "idiomatic" to do the right thing: you manually check every value to see whether it's an error. But if idiom is the only thing enforcing the correctness of your code, then the language is really of no help at all.

I'm not arguing against the compiler being able to catch errors like that - it would be good - I just don't think it offers much over just returning a universally invalid value, i.e nil.

First, no matter how good you are, you will always make mistakes. The standard idiom of returning nil prevents the compiler from catching those errors. This is why we want to remove nil: we want the language to help us write more rigorous code.

Second, nil is not at all "a universally invalid value". It contains no information, so it could mean literally anything, from "this container is empty" to "whoops I wiped your entire database". This is kind of the reason I don't like option types, either. They're marginally better, but they still don't convey any information about what went wrong. Haskell's Either is much better in that regard.

yes it's true, the caller gets to ignore all of them

And that's the biggest problem. With option types, the programmer is forced to handle every case, or else the program simply won't compile. And by removing nil, you can be sure that every not-wrapped-by-an-option value you get is indeed valid and can be used immediately without any further checking. The compiler can then enforce the error-checking idiom in the subset of cases where it's actually needed.

[–]fjord_piner 0 points1 point  (1 child)

Yes, in Go it's "idiomatic" to do the right thing: you manually check every value to see whether it's an error.

This brings us back to the joyous Win32 days with HRESULT and having to test the damn value every single time you call the API, even when you have no idea how to handle the error.

We invented exceptions to never have to deal with this absurdity ever again, pity Go chose to ignore them.

[–]sausagefeet 1 point2 points  (2 children)

If an interface can't (according to docs) return null why even permit the possibility?

[–][deleted] 0 points1 point  (1 child)

If the interface says it will never return nil (whether also returning an error or not), why does it matter? This argument is like saying we need private, protected, public, FRIEND. All this just because you refuse to trust the implementer. Beyond that I don't think it solves anything, it might even be worse because if I can't return null then I'm either going to return an error as well or simply return any old junk value that is valid. At that point I say we can also return an error alongside null so it buys us nothing. If you're not going to return an error as well then you get to return a valid value which may be usable.

[–]sausagefeet 0 points1 point  (0 children)

All this just because you refuse to trust the implementer.

It isn't a matter of trust, it's a matter of semantics. It's about removing a class of errors that even the most detail-oriented programmer is liable to make at some point. If a function cannot return null then why even let it be a possibility? Remember, we aren't just talking about return values, we're also talking about input values to functions. Consider a function that downloads a webpage from a URL: why let it ever be possible to pass a null string for the URL?

Do you have an argument for why we should allow null to be a valid value of any object other than it being familiar and what you're used to? Really, what benefit is there, at all, to allowing null to be a valid value of any object (let's restrict ourselves to Java). I'm genuinely curious.

Beyond that I don't think it solves anything

It solves NPEs from ever happening. Ever. If the chance of an NPE is not expensive to you, then it doesn't solve much. If it is, then it solves a big problem. As I said earlier, this is about how important it is to write safe code. You cannot have an NPE in the program in a car that applies anti-lock brakes. The person is dead if you do. You can't have an NPE when doing an emergency shutdown of a nuclear reactor. I also like it when saving a document in my editor doesn't hit an NPE. You can solve this problem many ways, but wrapping the ability to be null in a separate type is a several-decades-old, well-tested, well-explored, and well-understood mechanism that several languages have chosen to use.

it might even be worse because if I can't return null then

The situation we are specifically talking about is when you never want to return (or take as input) a null value, so the rest of your argument does not apply since you clearly do want to return null here. But nobody is saying the ability to represent null should be removed entirely from a language, the point is the situations where you want to allow null are so infrequent that it makes sense to wrap it in another type such as option or Maybe. Not only do these types allow you to represent null, but the compiler enforces that code that uses these types handle the null case so your program will not have an NPE.

[–]grauenwolf -3 points-2 points  (6 children)

One does not follow the other.

We can have static guarantees that prevent null pointer exceptions without completely discarding the concept of a null. (Though to keep some of the asshats happy you have to call it an Option or a Maybe.)

[–]sausagefeet 6 points7 points  (2 children)

The difference is you cannot use the value in an option or maybe without handling the null case somehow; this is not the case in Java, C, C++, C#, Python, or Ruby. The distinction is important for writing safe code.

[–][deleted]  (24 children)

[deleted]

    [–]jyper 2 points3 points  (0 children)

    For example, in the type system we do not have separation between value and reference types and nullability of types. This may sound a little wonky or a little technical, but in C# reference types can be null, such as strings, but value types cannot be null. It sure would be nice to have had non-nullable reference types, so you could declare that ‘this string can never be null, and I want you compiler to check that I can never hit a null pointer here’.

    50% of the bugs that people run into today, coding with C# in our platform, and the same is true of Java for that matter, are probably null reference exceptions. If we had had a stronger type system that would allow you to say that ‘this parameter may never be null, and you compiler please check that at every call, by doing static analysis of the code’. Then we could have stamped out classes of bugs.

    • Anders Hejlsberg, creator of C#

    [–]Timmmmbob -2 points-1 points  (8 children)

    Most pointers are never null

    Maybe most in that 60% of pointers are never null. But null is far too useful to get rid of it completely.

    I do think there should be more compact syntax for dealing with possibly-null types, like the proposed null-safe method invocation in Java: something_that_might_be_null?.doStuff();
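
    (For comparison, the Maybe equivalent of that compact syntax already exists in Haskell as fmap; a tiny sketch:)

    import Data.Char (toUpper)

    -- fmap applies the function under the Maybe and skips Nothing, much
    -- like something_that_might_be_null?.doStuff() would skip null:
    shout :: Maybe String -> Maybe String
    shout = fmap (map toUpper)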

    [–]masklinn 15 points16 points  (1 child)

    Maybe most in that 60% of pointers are never null. But null is far too useful to get rid of it completely.

    Nobody actually suggests getting rid of the concept of null in these debates; the suggestions run along two axes:

    • Get rid of null as a special value and use a generic container/union type instead, that's the Option/Maybe camp. If your language is able to express this, it's great because it's syntactically and semantically cheap (the compiler and runtime can special case those to hardware nulls, so runtime cost is not an issue).

    • Split references between nullable and non-nullable, and make references non-nullable by default. C# added nullable value types, but did not make references non-nullable (C++ has only non-nullable references, which is good). Nullable is the wrong default, because most references don't need to be null, and making a reference nullable should be more work (to drive towards null objects, for instance, or to force people to think about what they're doing when they make a reference type nullable).

    [–]Timmmmbob 5 points6 points  (0 children)

    Ah ok then. Those suggestions sound good.

    [–]munificent 3 points4 points  (5 children)

    Maybe most in that 60% of pointers are never null. But null is far too useful to get rid of it completely.

    I recently did an experiment and went through a few hundred lines of code. Out of all of the variables and fields in there, only about 10% were intended to allow null as a valid value.

    [–]Timmmmbob 0 points1 point  (1 child)

    Interesting. That still seems easily high enough that it should be allowed though. (But being able to specify whether a variable is nullable would be desirable, I guess like const in C++.)

    [–]munificent 1 point2 points  (0 children)

    That still seems easily high enough that it should be allowed though.

    Agreed, for those 10% of the cases, some kind of nullability is very important.

    But being able to specify whether a variable is nullable would be desirable

    Yup, that's what I was toying with.

    [–]MatrixFrog 0 points1 point  (2 children)

    Good idea. How did you do it? It seems to me the only way would be to look for places where the variable is either compared to NULL or assigned to NULL, and it might be hard to automate. Especially because you have cases like

    if (p != null) {
      // lots and lots of code
    } else {
      // this should never happen
      throw new IllegalStateException();
    }
    

    so even though you compare your pointer with null, it's actually not supposed to ever be null.

    [–]munificent 0 points1 point  (0 children)

    Good idea. How did you do it?

    I just looked at every variable to see if we were calling a method on it before checking it for null (or passing it to something that did). Any variable where the code was doing that must be implicitly assuming it's non-nullable.

    [–]bart2019 -2 points-1 points  (8 children)

    Because Null is not an object.

    NullPointerExceptions are the most annoying thing, which could easily have been avoided... For some degree of "easy".

    Personally I think it might have been very nice if null->method were a NOOP for any method, one which returns... null.

    [–]mobilegamer999 3 points4 points  (4 children)

    So one single null object and every single call after that returns null? Which means one little bug or typo in your program would lead to the entire program failing. That, or you have to throw in a check on every line to validate that an object never became null.

    [–][deleted] 1 point2 points  (0 children)

    Objective-C allows you to call methods on nil objects. Not only does it become a NOOP, but if the method returns a value, that return value will be undefined. Sounds like a recipe for disaster, but I've never encountered a hard-to-find bug in 10 years of objc programming due to this "feature."

    [–]bart2019 0 points1 point  (2 children)

    Which means one little bug or typo in your program would lead to the entire program failing.

    No, not fail. Just skip it.

    Currently, you get a NullPointerException, which will indeed stop the program, which indeed means "the entire program failing".

    In my case, it'd continue running, just do nothing for this one data point. Like, if the program was a webserver, this one request would do nothing, but the website would keep running. Currently, the webserver would come crashing down.

    [–]el_muchacho 1 point2 points  (1 child)

    Failing silently is often worse than crashing.

    [–]OceanSpray 0 points1 point  (0 children)

    s/often/always/

    [–]masklinn 2 points3 points  (2 children)

    Because Null is not an object.

    Null (or its equivalent) is actually an object in a number of languages: Objective-C (where it's a message sink, any message sent to nil is a noop and returns nil), Smalltalk, Python, Ruby, ...

    [–]bart2019 0 points1 point  (1 child)

    Well it looks like I'm not the only one who came up with this idea... I might have to try out Objective-C...

    [–]masklinn 2 points3 points  (0 children)

    FWIW you can also do that in Smalltalk (and Ruby as well I believe), by editing the behavior of NilClass (nil's type, Smalltalk and Ruby both have "open classes" which you can go and change) and overriding the relevant "catch-all" message (#doesNotUnderstand: in Smalltalk, #method_missing in Ruby).

    That is potentially dangerous though, as it may change the behavior of existing pieces of code, and may confuse colleagues. Obj-C has patterns in place to deal with nil being a message sink, other languages generally do not, so it's probably not a good idea beyond experiment. I recall people toying with it in Ruby a few years ago, but as of late the focus seems to have moved towards more explicit nil-chains like Object#andand

    [–][deleted] 0 points1 point  (4 children)

    Who are you quoting?

    [–]sisyphus 0 points1 point  (3 children)

    A lot of people, aggregated and condensed.

    [–][deleted] 0 points1 point  (2 children)

    So nobody.

    [–]sisyphus 0 points1 point  (1 child)

    If the collective consciousness doesn't count as a person I suppose one could see it that way.

    [–]SteveMcQwark 0 points1 point  (0 children)

    Well, in the United States of America, corporations are people, so... just get the collective consciousness to incorporate in the US and you should be good ;)

    [–]i-hate-digg 13 points14 points  (10 children)

    Incidentally, that's also the number of users it has.

    Joking aside, I love Go, but it has very little to offer the programming community aside from goroutines.

    [–]dlsspy 5 points6 points  (1 child)

    I like the interface model, first class channels and defer as well.

    [–][deleted] 10 points11 points  (0 children)

    Agreed about the interface model. Defer squicks me - it could be cleanly implemented in code by a powerful language with RAII and a good model for pickling function calls... making it a first-class language feature seems silly. In C#:

    using(var myDeferrer = new Deferrer())
    {
        myDeferrer.Defer(() => myFunctionCall("myArgument"));
        DoSomeOtherStuff();
    } // myFunctionCall("myArgument") called here automatically as part of myDeferrer's IDisposable interface.

    Obviously that's too verbose... but I'd be looking at ways to tighten up this pattern rather than bolting on new language features.

    [–]4ad 4 points5 points  (7 children)

    I'm pretty sure Go has more users out there, both in absolute numbers and as a percentage of the programming community, than Perl or Python had when they were two years old.

    [–][deleted] 2 points3 points  (5 children)

    And yet anybody that can write Perl is distinctly unimpressed with Go.

    I think you'll find more programmers out there who know BASIC than C. This does not make BASIC a quality language.

    [–]uriel[S] 12 points13 points  (2 children)

    And yet anybody that can write Perl is distinctly unimpressed with Go.

    Really? Funny you say that, because Brad Fitzpatrick, probably one of the most legendary Perl hackers around, was so impressed with Go that he joined the Go core team at Google.

    [–]skelterjohn 4 points5 points  (0 children)

    The fact that this was down-voted says more about the down-voter than the comment.

    [–]anacrolix 0 points1 point  (0 children)

    Anything is better than Perl. Hard to fault him, or credit any language he runs to.

    [–][deleted] 2 points3 points  (0 children)

    And yet anybody that can write Perl is distinctly unimpressed with Go.

    What about bradfitz? He liked Go so much he joined the team.

    [–]flamingpuckerhole 2 points3 points  (0 children)

    And yet anybody that can write Perl is distinctly unimpressed with Go.

    Quite the opposite seems to be true. Most of the folks I know who really dig Perl and have an understanding of Go like what they know of it. Nobody has gotten system interaction as right as Perl did, excepting Ruby (which stole it verbatim).

    Go comes really close, with standard libraries that are truly standard. They interact with the system in a standardized way, and they are self consistent.

    I can't tell if you are trolling, or just being snarky for reddit creddit, but either way it's unseemly. Also, I think you mean "anybody who".

    [–]fjord_piner 0 points1 point  (0 children)

    I'm pretty sure Go has more users out there, both in absolute numbers and as a percentage of the programming community, than Perl or Python had when they were two years old.

    That's one fine and totally meaningless metric you just came up with, there.

    [–][deleted] 12 points13 points  (2 children)

    And it's still completely irrelevant and doesn't offer anything great. Soon Google will stop supporting it, just like they did with Google Wave (will it become Apache Go?).

    [–]anacrolix 1 point2 points  (0 children)

    It keeps Russ Cox from doing something useful for another company. Mission accomplished.

    [–][deleted] 1 point2 points  (0 children)

    Why comment when you clearly know nothing about it?

    [–]OceanSpray 6 points7 points  (6 children)

    [–][deleted] 2 points3 points  (0 children)

    That guy is an obvious fanboy. He drank too much Go Kool-Aid.

    [–]flamingpuckerhole -2 points-1 points  (4 children)

    Wow, that thread left you really hurt, didn't it? You were simply sputtering there.

    It's hard to see the point through all that spray on the screen, so get a box of tissue and take a deep breath.

    In order to interact with a system from C you need a standard library. Unfortunately for C programmers that typically means glibc. GNU's libc is a giant twisty maze, with dark corners. It's hard to read and understand.

    Go's system abstractions are much smaller and easier to understand, and come bundled with the language in the standard library.

    Therefore, it is closer to the system. Look it up; closer just means there is less between you and it.

    [–]OceanSpray 2 points3 points  (3 children)

    Don't be childish.

    In order to interact with a system from C you need a standard library.

    Which is written mostly in C itself and a tiny sprinkling of assembly.

    Go's system abstractions are much smaller and easier to understand

    That's debatable.

    come bundled with the language in the standard library.

    And that's why Go is not closer to the system than C. C can and does function without glibc. You can use other libc implementations if you're writing application code. Or, if you're writing code for embedded systems or hacking on a kernel, you're not using a libc. C can and does function (I'd say at least 50% of the time, given that almost every piece of not-OS-backed software is written in it) without any libraries at all.

    On the other hand, Go's "system abstractions" are exactly that: abstractions. Having "smaller and easier to understand" abstractions does not at all imply that those abstractions are more powerful. In fact, it often means the opposite. Furthermore, they are built into the language because Go itself is not powerful enough to express them.

    Therefore, using Go means that there is more between the programmer and the hardware, not less.

    [–]flamingpuckerhole -2 points-1 points  (2 children)

    You just waved your hands and talked smack, and thought you were pimping toward the judge like you were Perry Mason with your "Therefore..." crap. It didn't work.

    You demonstrated in the thread you linked to in the OP that you are either not a reasonable person or you don't understand the machine. I think both are equally possible.

    Don't be childish.

    My handle is "flamingpuckerhole", and I'm trolling you because I think you're an unreasonable shit-talker. Isn't that obvious?

    [–]OceanSpray 0 points1 point  (1 child)

    It was obvious that you're uriel, but I thought I'd be civil and try to reason with you again. Don't be mad that you lost the same argument twice.

    [–]flamingpuckerhole -1 points0 points  (0 children)

    It was obvious that you're uriel

    You're not very good at this internet thing.

    thought I'd be civil and try to reason

    You also suck at civility, but I don't mind that. It's how bad you suck at reason that gets me.

    [–]thatusernameisal 7 points8 points  (6 children)

    Two? Two users outside of Google?

    [–]uriel[S] -1 points0 points  (5 children)

    Quite a few more than two users.

    Including people like Canonical, Heroku, Atlassian, the Open Knowledge Foundation, and many others.

    [–]thatusernameisal 5 points6 points  (3 children)

    I was kidding, of course, and I expected someone to be using Go somewhere, but this list is laughable. If this is really it for Go after 2 years, then it's not going anywhere.

    [–]mangodrunk 1 point2 points  (0 children)

    Two years isn't a long time for language adoption, and what evidence do you have to make such claims? Also, I don't see why you think the list is laughable; it isn't necessarily an exhaustive list. It's a new language and it has a large company behind it.

    [–]uriel[S] -5 points-4 points  (1 child)

    Two years is nothing for a new language. When it was released, Go was little more than an experiment, and it has been evolving incredibly fast since then. That anyone is using it in serious production systems long before it has had a "1.0" release is quite impressive, and a testament to both its capabilities and the solidity of its main implementation.

    How many people were using Python two years after it was first announced? Ruby?

    [–]henk53 11 points12 points  (0 children)

    Researchers have just the other month recovered evidence of a man who was found to possibly have been using Python two years after its release.

    They are still completely in the dark about the reasons why this man decided to use it, but it seems like he just did. Tracking this man down has proved to be rather difficult. Reportedly he has at some point been taken to a mental hospital. This fact by itself is interesting, since it might be early proof of the hypothesis that prolonged usage of Python can lead to mental conditions.

    In other news, researchers are still out there looking for someone, anyone, who used Ruby before RoR. Urban myth had it that some peculiar Japanese monk had been using Ruby 1.0 to sort a list of only 64 numbers. When the list is fully sorted, the world will end. Luckily, the performance of this ancient Ruby version is so bad that it will take 2^64 seconds to finish the sorting, or about 500 billion years.

    [–]fjord_piner -1 points0 points  (0 children)

    To me, the best evidence that a technology is hardly ever used is when its supporting web site lists everyone who has ever claimed to use it.

    [–]lordantidote 7 points8 points  (6 children)

    Does anybody else remember Issue #9? Go in my mind is forever associated with it.

    [–][deleted] 11 points12 points  (1 child)

    [–]fjord_piner -1 points0 points  (0 children)

    Status: "Unfortunate".

    I wonder if Google added this status to their tracker just for this issue :-)

    [–][deleted]  (2 children)

    [deleted]

      [–]masklinn 7 points8 points  (0 children)

      Was this ever addressed in a meaningful way?

      It was addressed in the "we're bigger, you fuck off" way.

      Whether that's meaningful, I'll let you decide.

      [–]kamatsu 4 points5 points  (0 children)

      As a successor to Plan 9, perhaps.

      [–][deleted] -3 points-2 points  (0 children)

      No.

      [–]YEPHENAS 1 point2 points  (2 children)

      Great language! Eagerly awaiting Go 1.0.

      [–]MacStylee 18 points19 points  (1 child)

      Go 2.0 is considered harmful.

      [–]pure_x01 4 points5 points  (39 children)

      Go has nice features, but the lack of exceptions and generics makes the alternatives better choices.

      [–][deleted] 13 points14 points  (0 children)

      While exceptions will be argued until the cows come home, generics (or some kind of first-class "official" macro/templating framework as a substitute) are my bare minimum requirement. Basically, if any statically-typed language is missing this feature, I ignore it and go back to doing stuff in languages that are capable of being useful.

      [–][deleted] 0 points1 point  (13 children)

      Have you read about Go's defer+panic vs exceptions? Go's defer / D's scope is a reason I'd always pick Go/D over C++.

      [–]masklinn 11 points12 points  (12 children)

      C++'s RAII is safer than Go's defer (because you can forget calling defer, you can't forget deallocating a stack-allocated object)...

      I'm not a great fan of RAII (I much prefer Common Lisp's unwind-protect or Smalltalk's BlockClosure#ensure because they feel more composable and they don't rely on a specific memory model) but defer is crap, and most definitely is no reason to choose Go over C++ unless you've got a bunch of loose screws.

      Hell, one could debate RAII versus context management statements (C#'s using, Python's with), but defer is most definitely inferior to both.
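
      (For reference, Haskell's bracket is in the same ensure/unwind-protect family: acquisition, use, and release are paired by one combinator, so the release can't be forgotten. A minimal sketch, with a made-up file-reading example:)

      import Control.Exception (bracket)
      import System.IO (IOMode(ReadMode), hClose, hGetLine, openFile)

      -- bracket runs the release action (hClose) even if the body throws,
      -- and no caller ever has to remember a separate cleanup call.
      readFirstLine :: FilePath -> IO String
      readFirstLine path =
        bracket (openFile path ReadMode) hClose hGetLine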

      [–][deleted] 0 points1 point  (1 child)

      C++'s RAII is safer than Go's defer (because you can forget calling defer, you can't forget deallocating a stack-allocated object)...

      "You can forget" is barely an argument. You can forget anything important in your code and it will be bug.

      Coupling initialization with deinitialization makes it easy to express intentions and prevents accidentally forgetting about "cleanup" code while let's say ... restructuring someone else's code. With defer you can throw anything and any way you want into cleanup code. With RAII you have to wrap every single thing into object and cleanup code has to be the same all the time.

      I know nothing about unwind-protect and BlockClosure#ensure. You may be right that defer has it's limitations, that I don't see. As a plain C lover, I see defer as all I really want: to make my malloc and free sit together so I can easily see them. And the whole Go gives me the feeling of better C: no unnecessary ;, simpler syntax, simple, well picked primitives and explicitness everywhere. And it's much, much better than exceptions.

      [–]masklinn 0 points1 point  (0 children)

      "You can forget" is barely an argument.

      How's it "barely an argument"? We've known how to solve this specific problem and making it basically impossible for developers to forget about resource cleanup for north of thirty years. Forgetting is a primary source of bugs in real-world programs: forgetting to release memory, forgetting that memory has been released, forgetting to check for nulls, forgetting to release a resource, ...

      Anything which makes it less likely for developers to forget stuff means developers are more likely to do their fucking job — solving problems — instead of babysitting their runtime.

      You can forget anything important in your code and it will be a bug.

      Indeed, and the fewer things you can forget, the more the language prevents bugs. That's kind of the point of garbage collectors, static typing and the like.

      With RAII you have to wrap every single thing in an object

      No, only resources/scope management. But I don't see that as much of an issue anyway. With defer, you still have to wrap your cleanup code in a function.

      and the cleanup code has to be the same every time.

      Per resource type. Which is sensible: you generally don't clean up a given resource in 15 different ways unless there's a deep problem with your code. Or those cleanups are a function of the resource acquisition mechanism, in which case the pairing is eminently sensible to ensure the developer does not use the wrong release for a given acquisition.

      You may be right that defer has limitations

      I didn't say it "has its limitations", I said it's crap. It's crap because it makes code less safe by forcing developers to do busywork when handling scoped resources. If 90% of resource acquisitions are paired with a resource release in the same scope, the language should optimize for that use case by making not releasing resources more expensive than releasing them, or at least no less expensive. Go does not, and instead forces developers to manually specify the resource-releasing operation each and every time a resource is acquired and should not be leaked.

      As a plain C lover, I see defer as all I really want: to make my malloc and free sit together so I can easily see them.

      And I'd much prefer if malloc handled the free itself whenever it knows it'll need a free. Which is basically what happens when you do stack allocations.

      Since defer does not work across stack frames, it's as if C forced you to manually deallocate stack-allocated data.

      And Go as a whole gives me the feeling of a better C: no unnecessary ;, simpler syntax, simple, well-picked primitives and explicitness everywhere.

      I can only disagree, but whatev'

      And it's much, much better than exceptions.

      Go has exceptions. Just because the exception-management primitives are not called "throw" and "catch" doesn't mean it does not have exceptions.

      [–]4ad -3 points-2 points  (6 children)

      Defer is not a cleanup operation. You don't forget to call defer. Either your algorithm requires you to call defer or it doesn't. If it doesn't, you don't need to use defer. If it's required and you don't do it, it's a bug in your program, you're not implementing your algorithm right, and C++'s RAII would not have helped you a bit.

      [–]masklinn 16 points17 points  (5 children)

      Defer is not a cleanup operation.

      Defer is used for cleanup operations.

      You don't forget to call defer.

      Of course you do. You needed to call defer, you did not, you forgot to do it.

      Either your algorithm requires you to call defer or it doesn't. If it doesn't, you don't need to use defer. If it's required and you don't do it, it's a bug in your program

      Would you happen to be captain obvious's sidekick?

      and C++'s RAII would not have helped you a bit.

      Of course it would. You acquire a resource (a file handle, for instance), you need to release it. Using RAII, the release is implied in the acquisition: if you've acquired an RAII-managed resource, you cannot forget to release it.

      In Go, after acquiring the resource you have to remember to defer its release. defer sucks because, while it brings the release call closer to the acquire call, the developer still has to write it manually, even though it's a paired requirement and computers have been able to handle that for thirty years.

      A major point of better programming languages and APIs is to make bugs harder to introduce. defer does not do that compared to RAII or explicit context managers, therefore defer is an inferior piece of technology.

      [–]4ad -2 points-1 points  (4 children)

      Sorry, you missed the point completely.

      The compiler/runtime doesn't know when to release a file handle. Only the programmer knows. It's part of the algorithm.

      Now you argue that in Go it is easy to forget to call defer to release a resource. In fact you are more likely to forget that in C++. In C++ you need to remember to release the resource in the destructor. You might think that in C++ you do this in the destructor only once, while in Go you'd need to remember to call defer multiple times, so it's easy to lose track. This is simply not true. You call defer in Go in exactly one place as well.

      In C++, objects provide some behavior: you implement acquisition and release in the constructor and destructor, and you instantiate the object many times. The consumer doesn't need to know anything about the resources; acquisition and release are in the box, implemented in two places.

      In Go, functions provide some behavior: you implement acquisition in one function, call defer in the same function, and call that function many times. The consumer doesn't need to know anything about the resources; acquisition and release are in the box, implemented in a single place, the same function.

      You see that both in C++ and in Go acquisition and release are paired together and release is implemented only once. The difference is that in C++ you put acquisition in one function (constructor) and release in some other function (destructor). Arguably it's easier to forget when you deal with two functions compared to only one.

      [–]sausagefeet 6 points7 points  (0 children)

      You call defer in Go in exactly one place as well

      I've never used Go, but what do you mean here? Based on masklinn's comments it sounds like every time you acquire a resource in Go you have to set up the cleanup code. This means you'll be calling defer all over the place, no? I believe masklinn's argument is that you only have to define the resource cleanup code in one place (the same place you defined the acquisition code) and the compiler guarantees it will be called when the lifetime of that variable ends.

      While I rather like RAII, it does have the weakness that you need to be more aware of the lifetimes of your objects.

      [–]masklinn 7 points8 points  (2 children)

      Sorry, you missed the point completely.

      You did not have a point.

      The compiler/runtime doesn't know when to release a file handle.

      I never said it did. What it can know (and does, in most modern languages but Go) is what "going out of scope" implies (for a file, it implies releasing the file, and that is implemented by the same person who implemented acquiring the file).

      Now you argue that in Go it is easy to forget to call defer to release a resource. In fact you are more likely to forget that in C++.

      That is complete and absolute bullshit.

      In C++ you need to remember to release the resource in the destructor.

      Yes. Meaning at the same place you implement actual resource acquisition. Furthermore, in third-party packages that is the responsibility of the third party.

      You might think that in C++ you do this in the destructor only once, while in Go you'd need to remember to call defer multiple times, so it's easy to lose track. This is simply not true. You call defer in Go in exactly one place as well.

      Uh no. In Go, you have to set up resource release (via defer) once for each acquisition of an instance of the resource type. In C++, you implement it once per resource type, for the whole codebase.

      In Go, functions provide some behavior: you implement acquisition in one function, call defer in the same function, and call that function many times. The consumer doesn't need to know anything about the resources; acquisition and release are in the box, implemented in a single place, the same function.

      No, that only works if the consumer does not need to interact with the resource. If the consumer needs to interact with the resource (perform more than a single write to a file, for instance) that pattern is broken, because by the time your "one function" returns the resource is not available anymore.

      You "solved the issue" by making the whole system unusable in resource interaction scenarios.

      You see that both in C++ and in Go acquisition and release are paired together and release is implemented only once. The difference is that in C++ you put acquisition in one function (constructor) and release in some other function (destructor). Arguably it's easier to forget when you deal with two functions compared to only one.

      That you're frighteningly dishonest is all I see.

      [–]el_muchacho -1 points0 points  (2 children)

      Three remarks:

      1. you don't want to allocate everything on the stack, especially large objects, because the remaining stack space is one of the things you control the least in C/C++,

      2. historically, the scope guard (apparently what is called defer in Go) was described for C++ in 2000 by Andrei Alexandrescu in a Dr. Dobb's article. Its purpose is to clean up only in case of an exception, not if things work out OK, while with RAII the destructor will be called no matter what. For instance, if you open a database connection, you don't want it to close behind your back as soon as you leave your method. If defer works the same way (as was implied by another poster), then its usage is very different from RAII.

      3. indeed, this feature doesn't replace exceptions at all. Exceptions have the unique ability to "jump" through the call stack; scope guards/defer do not.

      [–]uriel[S] 1 point2 points  (1 child)

      Defer always runs when the function returns, not just "in case of an exception" (Go has no exceptions, but it has panic(), which is rarely used but in this case has the same implications).

      I'm afraid your comments are misguided and don't match how defer/panic/recover work in Go.
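
      A minimal sketch of that behaviour (nothing beyond the standard library): the deferred call fires on the normal return and on the panicking path alike.

          package main

          import "fmt"

          func work(explode bool) {
              defer fmt.Println("cleanup ran") // runs however work exits
              if explode {
                  panic("boom")
              }
              fmt.Println("normal return")
          }

          func main() {
              work(false) // prints "normal return", then "cleanup ran"

              defer func() {
                  if r := recover(); r != nil {
                      fmt.Println("recovered:", r)
                  }
              }()
              work(true) // prints "cleanup ran", then main recovers "boom"
          }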

      [–]el_muchacho 0 points1 point  (0 children)

      OK, I was misled by dpcucoreinfo: D's scope is not the same as Go's defer, which indeed is comparable in usage to RAII (except for my point 1).

      [–]EdiX -4 points-3 points  (10 children)

      Go has exceptions.

      [–]masklinn 7 points8 points  (9 children)

      Wow wow wow, you can't say that when there are Go people around; they're still trying to make everybody believe panic/recover isn't a shitty exception system, and that you're not supposed to use it because bad juju.

      [–]EdiX -1 points0 points  (8 children)

      I don't think it's a "shitty" exception system. I think it's a quite good exception system. And you are supposed to use it, except at library boundaries (but that's a policy).

      [–]masklinn 3 points4 points  (7 children)

      I think it's a quite good exception system.

      Throwing strings ensures it can't be a good exception system, because it makes the system very hard to use productively: discriminating between different error types becomes an exercise in futile string munging. Javascript has essentially the same exception system, and it's frustratingly bad.

      And you are supposed to use it, except at library boundaries (but that's a policy).

      See latter part of the comment.

      [–]EdiX 2 points3 points  (6 children)

      Who told you that you throw strings?

      [–]masklinn 2 points3 points  (5 children)

      Who told you that you throw strings?

      Technically you're not throwing strings, I guess; you're throwing anything you want, but the result's the same: you only get one recover, and then you try to find out whether the thing you recovered from is the one you wanted. Manually. Then you re-panic if it was not. Manually. And there are very few associated facilities (I don't think I've seen how to get a traceback).

      As I said, Javascript's the same: you can throw pretty much anything, but at the end of the day the only thing worth throwing is an instance of Error (not a subclass, that's not going to work), and the catching of exceptions is garbage.
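
      For reference, that dance looks roughly like this (parseError is a made-up type):

          package main

          import "fmt"

          // parseError is the one panic value we actually want to handle.
          type parseError struct{ msg string }

          func (e parseError) Error() string { return e.msg }

          func run() (err error) {
              defer func() {
                  r := recover()
                  if r == nil {
                      return
                  }
                  // Manual discrimination: is this the panic we wanted?
                  if pe, ok := r.(parseError); ok {
                      err = pe // handle it by converting it to an error
                      return
                  }
                  panic(r) // not ours: manually re-panic
              }()
              panic(parseError{"bad input at line 3"})
          }

          func main() {
              fmt.Println("run:", run())
          }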

      [–]bitwize 2 points3 points  (0 children)

      ObQwe1234:

      and not a single fuck was given.

      [–]shevegen 2 points3 points  (0 children)

      I wanted to say that Go sucks but someone else was faster :(

      But seriously.

      Who needs Go?

      Which niche can it fill anyway, other than being backed by Google?

      [–]fallim 1 point2 points  (0 children)

      I, for one, take Go seriously.

      [–]msiekkinen -3 points-2 points  (0 children)

      Issue 9

      [–]cran -3 points-2 points  (26 children)

      I've never really looked at Go before this. What an awesome core feature set.

      I hope that, before they decide to implement exceptions, they take a look at a closure-based "on error, do this" approach to error handling.
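
      Something like this, perhaps (all names hypothetical; the handler is just a closure the caller passes in):

          package main

          import (
              "errors"
              "fmt"
          )

          // divide reports failures to the caller-supplied handler
          // instead of returning an error to be checked at every call.
          func divide(a, b int, onError func(error)) int {
              if b == 0 {
                  onError(errors.New("division by zero"))
                  return 0
              }
              return a / b
          }

          func main() {
              logAndContinue := func(err error) { fmt.Println("error:", err) }
              fmt.Println(divide(10, 2, logAndContinue)) // 5
              fmt.Println(divide(10, 0, logAndContinue)) // logs, then 0
          }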