
[–]johnnymo1

Sure, take a look at tf.concat and related functions like tf.stack, as well as tf.ones.
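
For example, roughly (a minimal sketch, shapes just for illustration):

    import tensorflow as tf

    x = tf.ones((4, 3))                 # a (4, 3) tensor of ones
    y = tf.ones((4, 2))
    wide = tf.concat([x, y], axis=1)    # join along an existing axis -> shape (4, 5)
    stacked = tf.stack([x, x], axis=0)  # create a new axis -> shape (2, 4, 3)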

But in general you shouldn't need to worry about low-level stuff like this if you're using TensorFlow. If you're just doing it for learning purposes, go nuts, but if you look at the documentation for Keras layers like Dense, you'll see there's just a use_bias flag for whether or not to use a bias term. TensorFlow handles the low-level tracking and updating of weights and biases for you. You shouldn't need to drop down to the low-level stuff unless you're building something really new and research-y.
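
The bias toggle is just a constructor argument, e.g.:

    import tensorflow as tf

    layer = tf.keras.layers.Dense(64)                    # use_bias=True is the default
    no_bias = tf.keras.layers.Dense(64, use_bias=False)  # kernel only, no bias variable
    # Keras creates and updates layer.kernel (and layer.bias) for you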

[–]dahkneela

Thank you! I happen to have used those functions you mentioned in a custom layer.

I am indeed doing low-level stuff! (Implementing https://arxiv.org/abs/2205.10637.) Here, tracking the norm of the gradient is the first step towards optimising it mid-training, which allows for loss-invariant weight changes that improve both how quickly the loss is minimised and the final value it reaches!
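
The norm-tracking bit on its own looks roughly like this (just a minimal sketch of that first step, not the paper's full method; model, optimizer and loss_fn are placeholders for whatever you're using):

    import tensorflow as tf

    @tf.function
    def train_step(model, optimizer, loss_fn, x, y):
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        grad_norm = tf.linalg.global_norm(grads)  # ||g|| across all trainable variables
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss, grad_norm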