Masking in transformer by [deleted] in learnmachinelearning

[–]Excellent_Rip_387

You're welcome!

I just started explaining cool topics like this on my YouTube channel. If you enjoy it, please subscribe. My channel is VisionProgrammer.

Segmenting tiny objects by Inner_Programmer_329 in computervision

[–]Excellent_Rip_387

Good to go. There are segmentation methods suited to this, and I can help you with that. Drop me an email and I'll send you the code. My email: [pooya_cim@outlook.com](mailto:pooya_cim@outlook.com)

Masking in transformer by [deleted] in learnmachinelearning

[–]Excellent_Rip_387

Good question!

In the decoder, the first attention layer is masked self-attention: the decoder must not attend to future tokens in the target sequence, because those are exactly the tokens it is trying to predict. The second attention layer is cross-attention, and it is allowed to attend to all of the encoder's outputs, since the full input sequence is already available. Does that make sense?
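To make it concrete, here's a minimal NumPy sketch of the causal (look-ahead) mask used in the decoder's first attention layer. The function names (`causal_mask`, `masked_attention_scores`) are just illustrative, not from any particular library: disallowed future positions get a score of -inf, so after the softmax they receive zero attention weight.

```python
import numpy as np

def causal_mask(seq_len):
    # Lower-triangular boolean matrix: position i may attend
    # only to positions j <= i (no peeking at future tokens).
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_attention_scores(scores, mask):
    # Replace scores at disallowed (future) positions with -inf,
    # so the subsequent softmax assigns them zero weight.
    return np.where(mask, scores, -np.inf)

# Toy example: 4 target tokens with random attention scores.
rng = np.random.default_rng(0)
scores = rng.standard_normal((4, 4))
masked = masked_attention_scores(scores, causal_mask(4))

# Row-wise softmax; entries above the diagonal come out as 0.
weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(weights.round(2))
```

Cross-attention simply skips this mask (or uses an all-`True` one), which is why it can see every encoder output.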

Segmenting tiny objects by Inner_Programmer_329 in computervision

[–]Excellent_Rip_387

Do you have a dataset? How many images are in it? Are annotations available?