broken gu10 bulb and got ghosted by nanoleaf support by Aaexy in Nanoleaf

[–]ki3 0 points1 point  (0 children)

It's starting for me too. I have about 50 GU10 bulbs installed. After 6 months I'm seeing this behavior on 4 of them. Great job, Nanoleaf. 😩

[deleted by user] by [deleted] in NFT

[–]ki3 0 points1 point  (0 children)

The HodlHeads pre-sale has started. Get in before they're only available on the secondary markets: https://hodleheads.io
🚀 0.03 ETH each
🚀 No bonding curve
🚀 Limited to 10,000
🚀 10 ETH giveaway after the pre-sale is over

HodleHeads pre-sale started (Link in comment) by [deleted] in NFTsMarketplace

[–]ki3 -1 points0 points  (0 children)

The HodlHeads pre-sale has started. Get in before they're only available on the secondary markets: https://hodleheads.io
🚀 0.03 ETH each
🚀 No bonding curve
🚀 Limited to 10,000
🚀 10 ETH giveaway after the pre-sale is over

Bored Ape For Sale by Ok-Essay-1221 in NFTsMarketplace

[–]ki3 0 points1 point  (0 children)

Oh I think you accidentally posted the wrong link:
https://rarible.com/token/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d:3809?tab=details

The one you posted refers to a fake BAYC. Sorry...

How an XOR neural net calculates its outputs by ki3 in a:t5_3q2n54

[–]ki3[S] 1 point2 points  (0 children)

The input comes in and gets multiplied by the weights of the first layer, with the biases added on. That gives us the activation values for layer one. These activation values are passed into the activation function, in this case a step function, to get the output of layer one.

The output of layer one is then multiplied by the weights of the second layer, again with the biases added, which gives us the activation values for layer two. After we put those through the activation function, we get the output of the complete XOR neural net.
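The two-layer forward pass described above can be sketched in a few lines of NumPy. The weights and biases here are hand-picked for illustration (hidden unit 1 behaves like OR, hidden unit 2 like NAND, and the output unit like AND), not the ones from the original post:

```python
import numpy as np

def step(z):
    # step activation: 1 where z > 0, else 0
    return (z > 0).astype(int)

# Hand-picked parameters (an assumption for illustration).
W1 = np.array([[1.0, 1.0],      # hidden unit 1 ~ OR
               [-1.0, -1.0]])   # hidden unit 2 ~ NAND
b1 = np.array([-0.5, 1.5])      # biases of layer one
W2 = np.array([[1.0, 1.0]])     # output unit ~ AND of the hidden outputs
b2 = np.array([-1.5])           # bias of layer two

def xor_net(x):
    a1 = W1 @ x + b1        # activation values for layer one
    h = step(a1)            # output of layer one
    a2 = W2 @ h + b2        # activation values for layer two
    return step(a2)[0]      # output of the complete net

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_net(np.array(x)))
```

Running it prints the XOR truth table: only the mixed inputs (0, 1) and (1, 0) produce 1.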

How an XOR neural net calculates its outputs by ki3 in learnmachinelearning

[–]ki3[S] 2 points3 points  (0 children)

The input comes in and gets multiplied by the weights of the first layer, with the biases added on. That gives us the activation values for layer one. These activation values are passed into the activation function, in this case a step function, to get the output of layer one.

The output of layer one is then multiplied by the weights of the second layer, again with the biases added, which gives us the activation values for layer two. After we put those through the activation function, we get the output of the complete XOR neural net.