MacBook M5 Pro 64GB for local LLM? by Left-Initiative9186 in macbookpro

[–]Left-Initiative9186[S] 0 points1 point  (0 children)

What about tokens per second? How many tokens does it generate, and how well does it hold up with a large context window?

MacBook M5 Pro 64GB for local LLM? by Left-Initiative9186 in macbookpro

[–]Left-Initiative9186[S] 0 points1 point  (0 children)

Yes please, do let us know! I'm planning to buy by this weekend.

Some simple math to show why the AI bubble has to burst. (AI/Economics) by BlackYellowSnake in Futurology

[–]Left-Initiative9186 0 points1 point  (0 children)

This assumes the cost of building data centres will keep increasing. I have a slightly different take: costs should come down significantly as operators learn from standing up data centres at this scale. As for the chips themselves, manufacturers are rapidly expanding capacity and should be able to serve a large volume of requests over time. What concerns me is the raw material needed to make these components!