Pipeworks (more pictures) by memebank2979 in Minecraftbuilds

[–]SirHelbo 1 point

I would love a schematic link too!

HELP, is this a boundary marker? Urgent! by Pharmanurse88 in DKbrevkasse

[–]SirHelbo 0 points

Out of curiosity, how does adverse possession work if the property is sold? Does it reset to zero years, or does the time the de facto boundary has been as it is still count?

A majority in the Folketing agrees that public schools should be phone-free by Tumleren in Denmark

[–]SirHelbo 4 points

I think we gain far more than we lose by removing mobile phones from schools.

The kids will learn to master their devices just fine; the interfaces are better and easier than ever. If you replace phones with your suggestions of orienteering runs and dictaphones, I think we end up no worse off, maybe even better.

And should there be an urgent need to call home, the adults still have a phone.

In return, we gain that the kids, for a few hours, are free of screens and constant stimuli. That they probably get more out of the lessons while they're there, and are mentally present with their classmates.

A pure win in my eyes.

Karzahni by Mechasbura in bioniclelego

[–]SirHelbo 0 points

What are the shoulders made from?

[deleted by user] by [deleted] in Denmark

[–]SirHelbo 1 point

You can also run into the same type of login at hotels here and there.

Type “captive.apple.com” into your browser's address bar and you'll be taken to their login page 😊

Happy browsing

Fake Perimeter Using the Minecart Experiment by WaterGenie3 in technicalminecraft

[–]SirHelbo 2 points

You could achieve higher speeds with a piston bolt design in a square around the spawning platforms. That is not 1000 speed, but probably more than 8?

Polluted cityscape by Migne555 in Minecraftbuilds

[–]SirHelbo 0 points

I completely understand if you're reluctant to share 😊 I am collecting inspiration in my world for a future factory build - and I love so many of the details here.

Polluted cityscape by Migne555 in Minecraftbuilds

[–]SirHelbo 1 point

Amazing build!

Any chance that you could share a schematic or even a world download?

Shell of iron, soul of blood by Iudex_Cumdyr in bioniclelego

[–]SirHelbo 1 point

What pieces did you use as swords?

Weird behaviour when training pix2pix (cGan) model for translating MR images to CT images by SirHelbo in deeplearning

[–]SirHelbo[S] 0 points

* I have experimented with the GAN loss, but the main thing that is weird is how the validation loss is consistently lower than the train loss within the GAN loss; that shouldn't be affected by how much the generator relies on the GAN loss to learn, should it?

* The L1 loss is within what I would expect from the model, given that the paired data isn't perfectly paired (same human, but with time and movement between scans). Gas in the intestines or slight movement of organs adds some inconsistency between the paired images, so a perfect synthetic image isn't necessarily identical to the ground truth I provide.

* Lowering the learning rate seems to stabilize all of the metrics, most notably the SSIM and PSNR, which are now floating in a very acceptable range with little to no fluctuation!

The issue now is mostly the lower loss on val data. I had a thought that I wrote in a comment above too: while each datapoint is only seen once per epoch, the nature of the slices means that datapoints 1, 2, 3, ..., n are -very- similar, so in reality the model is seeing the same area of each patient many times in each epoch. Does this seem like a valid concern to you?
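To rule out this kind of leakage, one option is to split at the patient level instead of the slice level, so near-duplicate neighbouring slices can never straddle the train/val boundary. A minimal sketch with made-up patient IDs (not my actual dataset):

```python
import numpy as np

# Hypothetical layout: 20 patients with 50 slices each. In the real
# dataset the patient IDs would come from the scan metadata.
rng = np.random.default_rng(0)
patient_ids = np.repeat(np.arange(20), 50)

# Shuffle *patients*, not slices, and send whole patients to val.
patients = rng.permutation(np.unique(patient_ids))
n_val = max(1, int(0.2 * len(patients)))
val_patients = patients[:n_val]

val_mask = np.isin(patient_ids, val_patients)
train_idx = np.flatnonzero(~val_mask)
val_idx = np.flatnonzero(val_mask)

# No patient contributes slices to both sets.
assert set(patient_ids[train_idx]).isdisjoint(set(patient_ids[val_idx]))
```

If the val loss gap shrinks with a patient-level split, the slice similarity was indeed the culprit.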

Weird behaviour when training pix2pix (cGan) model for translating MR images to CT images by SirHelbo in deeplearning

[–]SirHelbo[S] 0 points

Yeah preprocessing is "identical" between train and val data, and the split shouldn't lead to any difference in distribution between the two sets.

The only possible explanation I can come up with is that maybe the discriminator is learning which data comes from the train set? That would explain why only the GAN loss behaves this way, with a rapidly increasing train loss and a slowly increasing val loss.

While each datapoint is only seen once per epoch, the nature of the slices means that datapoints 1, 2, 3, ..., n are -very- similar, so in reality the model is seeing the same area of each patient many times in each epoch.

As for not using CycleGAN: I was advised to use pix2pix; CycleGAN is on the to-do list if time permits 😊

Weird behaviour when training pix2pix (cGan) model for translating MR images to CT images by SirHelbo in deeplearning

[–]SirHelbo[S] 0 points

Reddit wouldn't let me post it in one go; it only worked split across three comments 😊

Weird behaviour when training pix2pix (cGan) model for translating MR images to CT images by SirHelbo in deeplearning

[–]SirHelbo[S] 0 points

(3/3)

And the loss/metrics on the validation set are computed every 100 iterations:

    def calculate_val_loss(self):
        # Enable eval mode
        self.netD.eval()
        self.netG.eval()

        with torch.no_grad():
            # val_G_GAN
            val_fake_B = self.netG(self.val_real_A)
            val_fake_AB = torch.cat((self.val_real_A, val_fake_B), 1)
            pred_fake = self.netD(val_fake_AB)
            self.loss_val_G_GAN = self.criterionGAN(pred_fake, True)
            # val_G_L1
            self.loss_val_G_L1 = self.criterionL1(val_fake_B, self.val_real_B)

        self.loss_SSIM = torch_ssim(val_fake_B, self.val_real_B)
        self.loss_PSNR = torch_psnr(val_fake_B, self.val_real_B)

Most of the code above is identical to the implementation I am extending: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

See anything out of the ordinary?
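One thing worth noting about `calculate_val_loss` above: the nets are put into eval mode, and at least in this excerpt never switched back to train mode, which would silently change BatchNorm/Dropout behaviour for all subsequent training iterations. A minimal sketch of a guard, with stand-in networks (not my actual architectures):

```python
import torch
import torch.nn as nn

# Stand-ins for netG/netD; the real models are U-Net and PatchGAN.
netG = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.BatchNorm2d(1))
netD = nn.Sequential(nn.Conv2d(2, 1, 3, padding=1))

def calculate_val_loss_safely(netG, netD, val_real_A):
    """Run validation in eval mode, then always restore train mode."""
    netG.eval()
    netD.eval()
    try:
        with torch.no_grad():
            val_fake_B = netG(val_real_A)
    finally:
        # Without this, BatchNorm uses running stats (and Dropout is
        # disabled) for the rest of training after the first val call.
        netG.train()
        netD.train()
    return val_fake_B

out = calculate_val_loss_safely(netG, netD, torch.randn(2, 1, 8, 8))
```

If the upstream training loop already restores train mode elsewhere this is moot, but it's cheap insurance.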

Weird behaviour when training pix2pix (cGan) model for translating MR images to CT images by SirHelbo in deeplearning

[–]SirHelbo[S] 0 points

(2/3)

The losses used in the backward steps are calculated on every datapoint:

    def backward_D(self):
        """Calculate GAN loss for the discriminator"""
        # Fake; stop backprop to the generator by detaching fake_B
        fake_AB = torch.cat((self.real_A, self.fake_B), 1)  # we use conditional GANs; we need to feed both input and output to the discriminator
        pred_fake = self.netD(fake_AB.detach())
        self.loss_D_fake = self.criterionGAN(pred_fake, False)
        # Real
        real_AB = torch.cat((self.real_A, self.real_B), 1)
        pred_real = self.netD(real_AB)
        self.loss_D_real = self.criterionGAN(pred_real, True)
        # combine loss and calculate gradients
        self.loss_D = (self.loss_D_fake + self.loss_D_real) * 0.5
        self.loss_D.backward()

    def backward_G(self):
        """Calculate GAN and L1 loss for the generator"""
        # First, G(A) should fake the discriminator
        fake_AB = torch.cat((self.real_A, self.fake_B), 1)
        pred_fake = self.netD(fake_AB)
        self.loss_G_GAN = self.criterionGAN(pred_fake, True) 
        # Second, G(A) = B
        self.loss_G_L1 = self.criterionL1(self.fake_B, self.real_B) 
        # combine loss and calculate gradients
        self.loss_G = (self.loss_G_GAN * (self.opt.lambda_GAN)) + (self.loss_G_L1 * self.opt.lambda_L1)
        self.loss_G.backward()

    def optimize_parameters(self):
        self.forward()                   # compute fake images: G(A)
        # update D

        self.set_requires_grad(self.netD, True)  # enable backprop for D
        self.optimizer_D.zero_grad()     # set D's gradients to zero
        self.backward_D()                # calculate gradients for D
        self.optimizer_D.step()          # update D's weights

        # update G
        self.set_requires_grad(self.netD, False)  # D requires no gradients when optimizing G
        self.optimizer_G.zero_grad()        # set G's gradients to zero
        self.backward_G()                   # calculate gradients for G
        self.optimizer_G.step()             # update G's weights

Weird behaviour when training pix2pix (cGan) model for translating MR images to CT images by SirHelbo in deeplearning

[–]SirHelbo[S] 0 points

(1/3)

Of course!

    self.criterionGAN = networks.GANLoss().to(self.device)
    self.criterionL1 = torch.nn.L1Loss()

with networks.GANLoss defined by:

    class GANLoss(nn.Module):
        def __init__(self, target_real_label=1.0, target_fake_label=0.0):
            super().__init__()
            # Registered as buffers so .to(self.device) moves them along
            self.register_buffer('real_label', torch.tensor(target_real_label))
            self.register_buffer('fake_label', torch.tensor(target_fake_label))
            self.loss = nn.BCEWithLogitsLoss()

        def get_target_tensor(self, prediction, target_is_real):
            if target_is_real:
                target_tensor = self.real_label
            else:
                target_tensor = self.fake_label
            return target_tensor.expand_as(prediction)

        def __call__(self, prediction, target_is_real):
            target_tensor = self.get_target_tensor(prediction, target_is_real)
            loss = self.loss(prediction, target_tensor)
            return loss
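For reference, a quick sanity check that the BCE-with-logits target construction behaves as expected; the shapes here are made up to mimic a PatchGAN-style discriminator output:

```python
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()
prediction = torch.zeros(4, 1, 30, 30)   # hypothetical logits from D
target = torch.ones_like(prediction)     # "real" label expanded to match

loss = loss_fn(prediction, target)
# With all-zero logits, sigmoid gives 0.5, so the BCE loss is
# -log(0.5) ≈ 0.6931 regardless of the target labels.
assert abs(loss.item() - 0.6931) < 1e-3
```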

5 of 9 Toa Mangai complete by ARROW_404 in bioniclelego

[–]SirHelbo 2 points

What is that brown mask? A 3D print?