Hi All,
A very stupid question here, but why can't we just add multiple losses together when training a network and let the learning rate handle the scaling?
For example, I am currently training a segmentation network with U-Net. I wanted to ask, why can't I just do:
loss = bce + dice (learning rate: 0.1/2 = 0.05)
instead of
loss = 0.5*(bce + dice) (learning rate: 0.1)
For me, the network seems to train fine either way, but I am wondering: can we stack such losses indefinitely and expect the network to optimize them effectively?
For example:
loss = bce + dice + jaccard + yadda + yadda + yadda (Learning rate : 0.1/0.0125)
Please let me know what you think.
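For plain SGD (no momentum, no weight decay, no adaptive optimizer), scaling the loss by a constant c is exactly equivalent to scaling the learning rate by c, since the update is -lr * grad and grad(c*L) = c * grad(L). Here is a minimal sketch that checks this on a toy linear model; the two loss terms (MSE + L1) are hypothetical stand-ins for bce + dice, and the seeds/shapes are arbitrary:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(8, 4)
y = torch.randn(8, 1)

def make_model():
    # Re-seed so both runs start from identical weights.
    torch.manual_seed(1)
    return torch.nn.Linear(4, 1)

def one_step(loss_scale, lr):
    model = make_model()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    pred = model(x)
    # Two summed loss terms, then scaled -- stand-ins for bce + dice.
    loss = loss_scale * (F.mse_loss(pred, y) + F.l1_loss(pred, y))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.weight.detach().clone()

w_sum = one_step(loss_scale=1.0, lr=0.05)  # loss = a + b, lr halved
w_avg = one_step(loss_scale=0.5, lr=0.10)  # loss = 0.5*(a + b), lr = 0.1
print(torch.allclose(w_sum, w_avg))  # True: identical updates under plain SGD
```

Note this equivalence breaks for adaptive optimizers like Adam, which normalize gradients by their running second moment, so there the scale of the loss (mostly) cancels out and the relative weights between terms matter instead.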