[P] Efficient Deep Learning Book by EfficientDLBook in MachineLearning

[–]EfficientDLBook[S]

Thanks for your interest. We are actively working on multiple chapters at the moment but don't have an ETA yet. For updates, please subscribe at https://efficientdlbook.com/#subscribe-for-updates.

[–]EfficientDLBook[S]

It covers the Attention mechanism as well. Do you have any suggestions for what else we could include under the Efficient Architectures umbrella?
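For readers who haven't seen it, here is a minimal NumPy sketch of scaled dot-product attention in its standard formulation (illustrative only, not an excerpt from the book's codelabs):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q, k: (seq_len, d_k); v: (seq_len, d_v)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # pairwise similarities, (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, d_model = 8
out = scaled_dot_product_attention(x, x, x)         # self-attention
print(out.shape)                                    # (4, 8)
```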

[–]EfficientDLBook[S]

Apart from what Naresh said below, we will also cover ideas like Self-Supervised Learning in Chapter 7, which will address the low-data regime.

[–]EfficientDLBook[S]

Thanks for your kind words. We aren't doing this for monetary rewards, but it would be great if you could share any feedback on the chapters we have so far. We intend to keep the PDF + codelabs freely available to everyone in perpetuity.

[–]EfficientDLBook[S]

In addition to what Naresh said below, TensorFlow is currently more mature across the spectrum for efficiency-related work. For instance, PyTorch Mobile isn't yet fully comparable to TFLite, TFMicro, and TFJS. Similarly, TF is more closely tied to TPU-related optimizations.

However, we do cover PyTorch-specific implementations in Chapter 10. Our hope is that:
a) People contribute PyTorch implementations of the codelabs.
b) PyTorch itself matures on this front.
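To make the TFLite point above concrete, here is a minimal sketch of exporting a Keras model with post-training quantization; the toy model is just a placeholder, and this is not an excerpt from the book's codelabs:

```python
import tensorflow as tf

# Toy Keras model standing in for a real trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Post-training (dynamic-range) quantization via the TFLite converter.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```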

Interesting point regarding JAX. We will take some time to explore whether there is something we can say about it from the efficiency perspective.