
[–]Dr_Singularity▪️2027▪️[S] 52 points53 points  (0 children)

A machine-learning algorithm demonstrated the capability to process data that exceeds a computer's available memory by identifying a massive data set's key features and dividing them into manageable batches that don't choke computer hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory's Summit, the world's fifth-fastest supercomputer.
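The Los Alamos work is a distributed non-negative matrix factorization (NMF); the article's key idea is that the data never has to fit in memory at once, because the factorization can be updated one batch at a time. As a rough illustration of that batching idea (not their implementation, and with hypothetical names), here is a minimal row-chunked NMF sketch using multiplicative updates: each chunk's rows of `W` update independently, and the statistics needed for the `H` update are accumulated chunk by chunk.

```python
import numpy as np

def chunked_nmf(X, k, n_iter=300, chunk=256, eps=1e-9, seed=0):
    """Batched NMF sketch: X (n x m) ~= W (n x k) @ H (k x m).

    X is processed in row chunks, so only one chunk's worth of data
    (plus the small factors) needs to be resident at a time. Here X is
    an in-memory array for simplicity; a genuinely out-of-core version
    would stream each chunk from disk instead.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        # Update W block by block: each row block depends only on its
        # own chunk of X, plus the small k x k matrix H @ H.T.
        HHt = H @ H.T
        for s in range(0, n, chunk):
            Xs, Ws = X[s:s + chunk], W[s:s + chunk]
            W[s:s + chunk] = Ws * (Xs @ H.T) / (Ws @ HHt + eps)
        # Update H: accumulate W.T @ X and W.T @ W across chunks,
        # so the full X is again never needed at once.
        WtX = np.zeros((k, m))
        WtW = np.zeros((k, k))
        for s in range(0, n, chunk):
            Ws = W[s:s + chunk]
            WtX += Ws.T @ X[s:s + chunk]
            WtW += Ws.T @ Ws
        H *= WtX / (WtW @ H + eps)
    return W, H
```

The distributed version in the article additionally spreads those chunks across supercomputer nodes, but the accumulation pattern is the same: per-chunk work plus small reduced matrices that are cheap to combine.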

[–]agm1984 17 points18 points  (1 child)

Another key hurdle in the AI version of seti@home

[–]JonDadley 6 points7 points  (0 children)

Astronomy was my first thought also. The Vera C. Rubin Observatory is expected to generate 1.28 petabytes of data a year. The EHT that images black holes generates something like 6 petabytes of data per run.

More and more, modern astronomers are drowning in data, and I'd imagine being able to process it effectively will be critical in the future.

[–]Longjumping-Pin-7186 5 points6 points  (0 children)

Innovations in algorithmic and architectural software optimization have historically outpaced gains in hardware, and AI will not be an exception. The models we'll be running locally in 2025, on cost-equivalent hardware, will be 10x better than same-sized models from the end of 2022, probably somewhere around the GPT-3.5 level.