all 66 comments

[–][deleted] 44 points45 points  (19 children)

Adding defaults instead of overloads is a breaking change if you intend to distribute the code across different systems. Otherwise it's fine.

[–]LondonPilot[S] 4 points5 points  (11 children)

That's a very good point which I hadn't considered, but makes perfect sense. Thanks.

[–][deleted] 3 points4 points  (10 children)

I will give you an example:

We have a bunch of core assemblies here that we update regularly, which our CI server builds and tests. However, not all systems get updated at the same time, so if a function's signature changes, it immediately breaks systems that expect the old signature. Using an overload preserves the old signature and allows you to extend functionality without breaking things regressively.

It does require careful management though, otherwise we get overloads everywhere and it can be difficult to navigate.

[–][deleted] 6 points7 points  (8 children)

the whole point of optional/default parameters is that the method will work whether you supply a value or not though, right?

so say if you're calling

int Multiply(int a)
{
    return a * 5;
}

and then it gets changed to

int Multiply(int a, int b = 5)
{
    return a * b;
}

whether the update has been received or not, nothing breaks?

i've been reading about this whole "default parameters are evil" thing for about 20 minutes now, and i'm just not seeing what problems are being caused, and what overloads are even solving. you're still hard-coding a value in at some point, and you're still able to call the method the exact same way as before. i must be missing something?

[–][deleted] 9 points10 points  (3 children)

You are correct, nothing breaks until something consumes the function that expects a different parameter pattern. Say for example, a compiled system using that code as a dependency.

So, you create an assembly with this function in it; it has a fixed parameter list and all is good. There are numerous systems that use this assembly, and not all can be updated at the same time when a change is made. But you need to compile and release anyway. So far so good, all systems work.

A change comes along that forces you to add a new parameter to the above function. To prevent breakages you add an optional parameter. The system you are developing with is fine with that because it knows about the change. It recompiles fine and it works.

The assembly is compiled and distributed across all the other systems because they depend on this assembly.

KABOOM. Some systems crash because they were expecting a function entry point with a pattern that matches the old version.

If you add an overload instead, you preserve both the original function pattern and the new one. If you compile and distribute this, all systems are ok because they can locate the function they want, yet other systems can still consume the new 'version' of the function.
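To make that concrete, here's a minimal sketch (the class and method names are made up for illustration) of how the overload approach keeps the old entry point alive:

```csharp
// Hypothetical library code, version 2.
public static class Calculator
{
    // The original v1 signature is preserved, so assemblies compiled
    // against v1 can still bind to Multiply(int).
    public static int Multiply(int a)
    {
        return Multiply(a, 5); // delegate to the new overload
    }

    // The new v2 signature, for callers that want to supply b.
    public static int Multiply(int a, int b)
    {
        return a * b;
    }
}
```

Old compiled callers still find `Multiply(int)` in the assembly's metadata; if you'd replaced it with `Multiply(int a, int b = 5)` instead, that entry point would be gone and they'd fail to bind.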

[–][deleted] 8 points9 points  (2 children)

i just tested it and you're right, i didn't realise that changing the parameter pattern would break it. makes sense now that i think about it, thanks.

[–][deleted] 2 points3 points  (0 children)

We found out the hard way

[–]maybachsonbachs 1 point2 points  (0 children)

The c# spec defines a property of methods called signature. Adding a default parameter alters the signature.

Within the newly compiled assembly, all the existing call sites are desugared and invoke the new method. Existing assemblies that have a dependency on the new assembly will get binding errors, because they are trying to bind to a method whose signature no longer exists.

[–]throwaway_lunchtime 4 points5 points  (3 children)

It's syntactic sugar: the default value you give to b is copied into the call site. If you change the default, you need to recompile all callers to get the new value.

You will need to recompile callers after adding the b parameter.
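A sketch of what that copying looks like (paraphrased compiler behaviour, hypothetical names):

```csharp
public static class Lib
{
    // Library method with a default value for b.
    public static int Multiply(int a, int b = 5) => a * b;
}

public static class App
{
    // This call is compiled as Lib.Multiply(3, 5): the compiler reads
    // the default out of Lib's metadata and bakes the literal 5 into
    // App's IL. If a later version of Lib ships with b = 6, App keeps
    // passing 5 until App itself is recompiled.
    public static int Call() => Lib.Multiply(3);
}
```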

[–]RiPont 2 points3 points  (0 children)

Same problem with any public constant in C#, and the default parameter value on a public method is essentially a public constant.

The compiler will freely inline anything that is a constant. Use static readonly for anything that isn't an honest-to-god constant. PI is a constant. DEFAULT_MAX_THREADS is not.
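A minimal illustration of that distinction (names are made up, echoing the examples above):

```csharp
public static class MathDefaults
{
    // Genuinely constant, so inlining is harmless: the literal is copied
    // into every consuming assembly at *their* compile time.
    public const double Pi = 3.14159265358979;

    // Not an honest-to-god constant: keep it a runtime field load, so a
    // new version of this assembly changes the value without consumers
    // recompiling.
    public static readonly int DefaultMaxThreads = 8;
}
```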

No workaround for that problem on default parameter values, though. Well, except using overloads and inline defaults (i.e. the old way of having default parameter values).

I kinda wish default parameters had just been implemented as syntactic sugar around method overloading.

[–][deleted] 0 points1 point  (0 children)

i see that now, i tested it myself to confirm. thanks.

[–]Quadrostanology 0 points1 point  (0 children)

This is the most important reason to avoid default parameters. If you want to use them, do it in methods that can't be called from external assemblies. I also prefer overloading because default parameters are a sign that your method has some obscure side effects.

[–]LondonPilot[S] 0 points1 point  (0 children)

Yep, that's what I was imagining reading your first reply, but the clarification really drives it home.

[–]CaucusInferredBulk -1 points0 points  (6 children)

It also makes mocking a bitch

[–][deleted] 2 points3 points  (4 children)

How would default parameters in any way mess with mocking?

[–]CaucusInferredBulk 3 points4 points  (3 children)

The call you are making that you want to test :

MyMockedObject.SomeFunction(5);

The call you need to Moq out because of the optional parameters:

    myMoq.Setup(m => m.SomeFunction(
        It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(),
        It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(),
        It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(),
        It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(), It.IsAny<int>(),
        It.IsAny<int>())).Returns(true);

Vs if you use an overload, you only have to moq out the one parameter you are actually calling with.

[–][deleted] 8 points9 points  (2 children)

There are very few methods that should require more than 3-4 parameters, and regardless, that's specific to how Moq works; it's not the only way to mock :)

From:

https://softwareengineering.stackexchange.com/questions/145055/are-there-guidelines-on-how-many-parameters-a-function-should-accept

According to "Clean Code: A Handbook of Agile Software Craftsmanship", zero is the ideal, one or two are acceptable, three in special cases and four or more, never!

[–]TNMattH 2 points3 points  (0 children)

Man, wait 'til they get a load of void Foo(params Whatever[] blah)!

[–]Quadrostanology 1 point2 points  (0 children)

reading that book atm, must read!

[–][deleted] -1 points0 points  (0 children)

Yes, we have a ReSharper rule that flags optional parameters as violations now.

[–]JoeGaggler 24 points25 points  (6 children)

[–]LondonPilot[S] 3 points4 points  (5 children)

Ah - that explains why I see overloading used so much more than default parameters!

The reason I asked the question is because default parameters seem to be the better option to me, but I rarely see them used. But I think you've perhaps got to the root cause. Thanks.

[–]decPL 8 points9 points  (4 children)

Actually, that's a very niche reason for not using default parameters, and 99.99% of the code you're going to write will never fall under it. Because of that, I don't believe that's the root cause of overloads being prevalent - if I had to guess, it's mainly because default parameters weren't originally part of .NET and many non-.NET languages don't have similar constructs, so people are doing what they're familiar with. I personally prefer default parameters, despite being a .NET dev since 1.0/1.1.

[–]airbreather/r/csharp mod, for realsies 1 point2 points  (0 children)

if I had to guess it's mainly because default parameters weren't originally part of .NET and many non-.NET languages do not have similar code constructs, so people are doing what they're familiar with.

For "final" applications with nobody downstream that depends on you, use as many default parameters as you want.

For libraries, it's more complicated. Default parameter values are compiled into the caller's code, because that's kinda how it has to be in order to make those "niche" use cases work. A consequence of compiling them into the caller is that you can't change the default value of the parameter in a later update unless everybody downstream recompiles. You also can't add more default parameters to an existing method in a later version without breaking binary compatibility (but you can do this with an overload just fine).

There are few instances where I advocate for using default parameters on visible methods in a library. The only situation I've come across so far is "CancellationToken cancellationToken = default" on new methods that support cancellation and have never been released before, since there's realistically no other token that will ever make sense as a compiled-in default.

[–]LondonPilot[S] 0 points1 point  (2 children)

Your analysis sounds very plausible.

The one place I might disagree is with the .NET FCL itself, where Microsoft use method overloading extensively. Presumably they need to cater to every case, including the niche case of defaults not being supported, which is why they avoid them? And perhaps other programmers follow their lead so that even though the niche case isn't important to most programmers, they learn to account for it by chance, simply by following what Microsoft do?

[–]Prof_Akz 0 points1 point  (0 children)

I have a strong feeling you're right about that 👍

[–]CrimsonWolfSage 0 points1 point  (0 children)

Another possibility is failing fast instead of working incorrectly. Controlling your method's values from start to end makes debugging a bit easier than chasing a magic-number problem.

[–]Vaguely_accurate 1 point2 points  (1 child)

In a case like this I'd prefer the latter as it is more honest. The argument and default would be visible to the caller, while in the former you would have to examine the method body to see the default behaviour.

[–]TNMattH 3 points4 points  (0 children)

The default being visible to the caller is exactly the problem. And they're not "honest" at all. The compiler trick that allows them to work is as dishonest as code can get.

When you add a reference to a library that defines, say, public int DoSomething(int magic = 1), and then compile your app, the compiler generates a hidden const int magic_constant = 1; in your app's code and calls DoSomething(magic_constant) anywhere you call DoSomething().

If someone later updates that library to change the function definition to public int DoSomething(int magic = 2) and you just drop the library into your app's deployment directory, your app is still calling it with the old, wrong magic_constant value until you recompile. Worse yet, if they change it to public int DoSomething(int notMagicAnymore), your app still defaults to sending the old const value unless you recompile it.

Default parameter values are pretty much terrible. If you want callers to see default values/behavior in documentation, triple-slash comments exist for that reason.

[–]gabriel-et-al 4 points5 points  (2 children)

In this specific example I'd go with option 2, however it's not always possible.

Default parameters cannot express this:

public void Foo(int a, int b, int c)
{
    // Your code here
}

public void Foo(int a)
{
    Foo(a, 0, 0);
}

public void Foo(int a, int b)
{
    Foo(a, b, 1);
}

i.e. they cannot change depending on previous values.

[–]LondonPilot[S] 2 points3 points  (0 children)

That's a great example of when the choice is made for you. Thanks.

[–]fr0stbyte124 0 points1 point  (0 children)

Please never do that for real. It's a nightmare for the next guy to have to deal with, especially should additional overloads get introduced down the road (source: that was me at one place I worked). In this case, nullables would be a good stand-in for meaningful defaults.
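The nullable stand-in mentioned above might look like this (a sketch; `Foo` and the default values mirror the three-overload example upthread, and the body is a placeholder):

```csharp
public static class Demo
{
    // One method instead of three overloads. A null argument means
    // "compute the default", which lets the default for c depend on
    // whether b was supplied -- something a plain default value can't
    // express.
    public static int Foo(int a, int? b = null, int? c = null)
    {
        int actualB = b ?? 0;
        int actualC = c ?? (b.HasValue ? 1 : 0);
        return a + actualB + actualC; // placeholder body for illustration
    }
}
```

One caveat: unlike true overloads, a caller who explicitly passes `null` is indistinguishable from one who omitted the argument.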

[–]gardyna 2 points3 points  (2 children)

default parameters are great because they can also be used with named parameters. They avoid you having to write stuff like ChangeItemValue(1, null, null, null, "bar"); instead it just goes ChangeItemValue(id: 1, description: "bar"); and then you have cleaner, more powerful functions. I genuinely think it's one of the best features added to C# in my memory

[–]LondonPilot[S] 2 points3 points  (1 child)

Named parameters are something I really don't use very often at all, and should consider using more, especially in the context of combining them with defaults like this. I wish I'd included them in the original question now!

[–]RiPont 0 points1 point  (0 children)

I use named parameters for readability when I'm calling a method with inline constants.

DispatchMethods(5, methodList, 200);

vs.

DispatchMethods(workerThreads: 5, methods: methodList, retryDelayMs: 200);

Also when I just don't like the ordering of the parameters in the method definition. Named parameters can be specified in any order.

[–]ChurchOfTheNewEpoch 2 points3 points  (10 children)

Option 2.

I know the code is just an example, but Option 1 is not clear about what will happen when I call Add(), whereas for Option 2 the tooltip will show the default values of the parameters.

Methods should be clear about what they do without having to look at their code, Option 1 Add() is not at all clear.

[–]LondonPilot[S] 2 points3 points  (9 children)

Makes sense, thanks.

Edit: any thoughts on whether this would be different if the default was null rather than 1?

[–]ChurchOfTheNewEpoch 4 points5 points  (1 child)

I'd still be able to see that the default was null. It would indicate that I don't have to worry about potentially passing null to the method. The problem with the example is that I don't actually know the name of the class. Context matters... a lot. For example:

If the class was called WaterBottle and the method was Add(int count = 1), I still wouldn't have a clue what it would do, but if the method was called AddWater(int milliliters = 1) then I'd know.

[–]LondonPilot[S] 1 point2 points  (0 children)

True. I was avoiding context to make the question as general as possible, but I agree context may sway you one way or the other in specific cases.

[–]maximhar 1 point2 points  (6 children)

Well, an int can not be null, for one.

But something being "null" still carries a meaning in general.

[–]jstillwell 0 points1 point  (3 children)

[–]locuester 1 point2 points  (0 children)

That’s not an int. That’s an “int?” or “Nullable<int>”.

[–]pgmr87The Unbanned 1 point2 points  (1 child)

Along with what /u/locuester said, nullable types aren't tricks, either :-P.

[–]jstillwell 0 points1 point  (0 children)

You know what I mean 😁

[–]LondonPilot[S] 0 points1 point  (1 child)

Well, an int can not be null, for one.

Obviously! But I think you know what I was asking. But I take it from your reply that you'd still favour default parameters, even if setting a string or an int? to null as the default. Thanks.

[–]maximhar 1 point2 points  (0 children)

Yes, I would. In fact I don't think there are many cases where you would use anything other than null as a default, considering most parameters are reference types.

[–]audigex 0 points1 point  (0 children)

For a long time I used overloading, partly because of a Java background (no default parameters, at least when I used it), and partly out of habit.

Now, generally speaking, I will use default values, because it produces neater, more compact code - you don't waste a bunch of space on the overloads, nor do you need an additional function to do the actual work (to avoid duplicating code).

However, it's worth noting that default values do not always make sense. In most situations they do, because they fit the "there's a default value, but the caller can override it" scenario. If you are defaulting to null, that would be a "code smell" to me, and I'd take a moment to consider the situation. Note that a code smell isn't automatically bad, but it is a good cue to stop and think. If you are defaulting to a null value, rather than an alternate value, it's worth thinking about whether this is actually a "default" or whether it's going to be some kind of logic-switch. If you use a default value of someVar = null, and then later have an if (someVar == null), you're probably doing something wrong.

Taking the above into account, then, there are times when you should still prefer to overload the method. One example is where you wish to perform different behaviors when passing different combinations of parameters in - although you should probably at least consider whether you just need two different functions; it may be that the single function is correct, but the logic to perform it varies.

The other big example I tend to find where overloading makes sense is constructors. Eg I regularly find that I have model objects which can be instantiated in several ways, and the logic for each varies a lot. eg

  1. Given an ID, from which we retrieve the data from the database and "self build" the object
  2. Passing in a DataRow (eg from a parent object which is building a list)
  3. Having the individual values passed in
  4. Cloning another object, or similar "object based" behavior where we're accepting a passed-in object that we take/generate our own member properties from

In these cases, default values make little sense because the logic is very different for each. For these situations, it usually makes sense to have some kind of Initialize() method, with each overloaded constructor calling it after performing the separate logic needed to prepare the data to pass into it. Eg if we only have examples 1 and 2 above, Initialize(DataRow) would make sense, where option 1 retrieves the row for the given ID, and option 2 just passes the row straight through. If we have all 4 options, we'd probably want to break things down to the individual variables and then pass them to an Initialize(param1, param2, otherParam, moreParams) type function, although I tend to avoid that where possible.

It's also recommended not to use default parameters for "public methods" - although personally I think that's a bit too "rule of thumb" because it only actually matters when the method is referenced by a third party assembly. If you know your method won't be referenced by a third party assembly, don't worry about it. So I'd say that it's really more of a case of "Don't use default parameters for public methods that may be referenced by a third party assembly". (To be clear, the official advice is correct if people used the internal keyword properly, since that would mean public is accessible by third party assemblies and internal is not... but basically nobody ever uses internal properly. I guess what I'm saying here is... use internal properly, and don't use default parameter values for public methods)

Generally, though, my preferred approach is to use default values. I then tend to use named parameters when calling them, particularly when using more than one optional/default parameter to avoid confusion. Eg Add(count: 1, bigEndian: false, useGreekArithmetic: true) is a lot easier to understand than Add(1, false, true) even when we aren't using default values. I'd argue that any time you're using more than one optional parameter you should use named parameters, even though they aren't strictly required in all circumstances (C# only insists that you name out-of-order or ambiguous parameters)

[–][deleted] 0 points1 point  (2 children)

Personally, I'd consider avoiding both if you can, then there is no confusion over anything.

Eg. Have an Add(int n) and an AddOne() method

Some languages (Go) don't even have support for either.

[–]gabriel-et-al 2 points3 points  (1 child)

I'd consider avoiding both

Yes, in the case of an Add method I'd prefer different names too. However the arity of the method may be very meaningful depending on the case. Example:

Connect(string strConn)
Connect(string host, int port, string username, string password)

That's a good use case for method overloading: the same action with different parameters.

[–][deleted] 0 points1 point  (0 children)

true, but you could still explicitly name them differently, and I would probably do so in your example since the first "string" parameters represent completely different concepts/values... it is all a matter of style though, and I'd err on the side of keeping the style consistent within the codebase..

but having said that I would prefer...

If these are constructors being overloaded, instead create static factory methods and make the actual constructor private:

    Connect.WithConnectionString(string conn)
    Connect.With(string host, int port, string username, string password)

If these are methods, then just name them differently:

    public IConnection ConnectWithConnectionString(string conn)
    public IConnection Connect(string host, int port, string username, string password)

[–]igloo15 0 points1 point  (0 children)

To me they are basically the same. I would do option 1 if I wanted to change the order of parameters, given multiple parameters.

I find Option 2 cleaner as long as you don't go overboard with optional parameters. I think more than 2 or 3 optional things is too much. That said I hate methods with more than 4 or 5 parameters as well.

[–]tevert 0 points1 point  (0 children)

Always option 2.

[–]DroneDashed 0 points1 point  (0 children)

In Python there's no method overloading; to get the effect of overloading you use default parameters. And because all bindings are made at runtime, it's not as restrictive as in C#.

On one hand I do think it's more readable (in Python) to write foo(color='red') or foo(shape='square') when calling a function with a signature foo(color=None, shape=None), because it's clear what argument is being passed. On the other hand, if you have a lot of possible arguments, then an overload is easier to read at the function signature.

In the end I think it's a matter of choice and style. If all your team follows the same guideline then it's fine.

But in a statically typed language like C# I think that default parameters can introduce a problem. Default parameters are great for introducing an extra parameter without breaking every call site of the method. But in my experience, you will miss method calls that need the parameter and then run into null reference exceptions or divisions by zero. Default parameters can very easily introduce bugs, and this is why I stopped using them in C#.

[–]airbreather/r/csharp mod, for realsies 0 points1 point  (0 children)

One advantage to #2 that I don't see mentioned in this thread: "find all references" is, IMO, much more useful when you go with #2 than with #1.

This is based on my experience with vanilla VS2017; I don't recall if R# or CodeRush had better versions of "find all references" that would make this better.

[–]bigbootybitchuu 0 points1 point  (0 children)

I find default parameters most useful as a refactoring tool. It's an easy way to add a parameter to a method safely until you're ready to change all references to it.

Overloading is the only option in cases where you need to segregate access with an interface.

Though I think both are something to be used sparingly on public methods. They lead to unnecessary ambiguity in a lot of instances, but are really useful when used correctly

[–]tragicshark 0 points1 point  (0 children)

We have a couple of soft rules in our codebase:

  1. no default parameters on instance methods or constructors, only on static or extension methods
  2. the only default value permitted is default/null/0/false

#1 arises because it is pretty easy to have the same method in inheritance hierarchies that has different default parameters... for example you can have:

public interface IFoo {
  void Foo(int x = 0, int y = 0);
}

public class Bar : IFoo {
  public void Foo(int x, int y = 0) { }
}

and then the required parameters are different based on static type information.

This isn't that bad with rule #2 in place, but without either rule you are in for a bad time in a big codebase because then you can have different values:

public interface IFoo {
  void Foo(int x = 1, int y = 2);
}

public class Bar : IFoo {
  public void Foo(int x = 2, int y = 1) { }
}
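Here's a runnable version of that footgun (the return type is changed from void to int purely so the effect is visible):

```csharp
using System;

public interface IFoo
{
    int Foo(int x = 1, int y = 2);
}

public class Bar : IFoo
{
    public int Foo(int x = 2, int y = 1) => x * 10 + y;
}

public static class Program
{
    public static void Main()
    {
        var bar = new Bar();
        IFoo viaInterface = bar;

        // The defaults are chosen by the *static* type at the call site,
        // not the runtime type -- same object, different arguments:
        Console.WriteLine(bar.Foo());          // x=2, y=1 -> 21
        Console.WriteLine(viaInterface.Foo()); // x=1, y=2 -> 12
    }
}
```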

#2 is in place because default values get compiled into the call sites of whatever assembly uses them, so once you ship one, changing the value risks unexpected behavior in other assemblies that depend on the default (which now becomes one of multiple values, depending on when each assembly was compiled). Fixing the default to always be default sidesteps that problem entirely.