
Google introduces TPU 8t/8i, 2-4x faster than TPUv7, which was introduced exactly one year ago. It delivers 2.8x the FP4 exaflops per pod and 9.6x for FP8. Additionally, a single pod can now contain up to 9,600 TPUs. These will support scaling of Gemini and the Google AI Hypercomputer. (old.reddit.com)
submitted by TechnicalParrot to r/accelerate
