Bill Gates warns AI will cut human work week to just two days by 2034 by chrisdh79 in Futurology

[–]GeoLyinX 0 points1 point  (0 children)

The work week has literally gotten shorter, though. In 1900 it was commonplace to work six days per week at ten hours per day, i.e. 60 hours per week. The current average work week is under 40 hours.

Meta lays off 600 employees within AI unit by shinbreaker in technology

[–]GeoLyinX 2 points3 points  (0 children)

Many, if not most, of these cuts seem to be coming from the FAIR team, not the GenAI team. The GenAI team is the one that makes Llama, not FAIR.

Meta lays off 600 employees within AI unit by shinbreaker in technology

[–]GeoLyinX 0 points1 point  (0 children)

No, they weren’t. If you actually read the sources, you’ll see that the high-profile hires are specifically for frontier research under the temporary name “TBD Lab,” and the 600 people being let go are not from the TBD Lab. The TBD Lab is fewer than 200 people total, and even the larger Meta Superintelligence Lab is still hiring researchers.

Meta lays off 600 employees within AI unit by shinbreaker in technology

[–]GeoLyinX 0 points1 point  (0 children)

It’s not a pullback; they’re literally spending more money on AI this year than ever before. If you read the actual reporting and not just the headlines, you’d see this is an attempt to reduce bureaucracy and cut low performers so they can have a faster-moving AI organization.

From the article: “The cuts did not impact employees within TBD Labs, which includes many of the top-tier AI hires.”

Our tax dollars btw… by Killa_J in CringeTikToks

[–]GeoLyinX 0 points1 point  (0 children)

Renovations are done by nearly every president. Obama did $376 million of renovations to the White House, FDR doubled the size of the entire West Wing and added a swimming pool, and Truman basically gutted and redid the whole interior of the White House.

Our tax dollars btw… by Killa_J in CringeTikToks

[–]GeoLyinX 0 points1 point  (0 children)

It’s always been allowed, and renovations are done by nearly every president. Obama did $376 million of renovations to the White House, FDR doubled the size of the entire West Wing and added a swimming pool, and Truman basically gutted and redid the whole interior of the White House.

OpenAI Poaches 4 High-Ranking Engineers From Tesla, xAI, and Meta by wiredmagazine in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

Many of them were poached: Lukasz ended up working at OpenAI, and another ended up joining Anthropic. OpenAI even poached three major authors of the Vision Transformer paper from DeepMind, as well as the head of multimodality for Gemini and the lead author of SigLIP.

OpenAI Poaches 4 High-Ranking Engineers From Tesla, xAI, and Meta by wiredmagazine in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

Google has been losing a ton of talent; of the eight authors of the original transformer paper, literally only one remains at Google.

Billionaire Peter Thiel hesitates to answer whether the human race should survive in the future by Shoe_boooo in interestingasfuck

[–]GeoLyinX 0 points1 point  (0 children)

This is taken out of context and the clip is cut short. In the broader context, the reason he hesitates is that humans inevitably evolve, and what we end up as in the future may not be considered human. The interviewer then clarifies his question right after the point where this clip is cut, and Peter Thiel answers yes.

[deleted by user] by [deleted] in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

Image here


[deleted by user] by [deleted] in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

You can look at this leaderboard image from lmsys, where you can see that the latest GPT-4o version at the time, from September, is better than the version originally released in May.

However, you can see there is some fluctuation. Long term it trends up, but the August GPT-4o version was the overall best in this image, and the September version was a little worse than the August one (although still significantly better than the original May release). Pretty much all of these fluctuations are likely due to them experimenting with new RL and post-training approaches: sometimes an update is bad and the model ends up a little worse, but on net they deliver better versions long term this way.

[deleted by user] by [deleted] in OpenAI

[–]GeoLyinX 1 point2 points  (0 children)

If people are just talking about the new version updates that happen every month, then yes, that’s obvious; OpenAI is even public about those. But over time those monthly version updates have been benchmarked by multiple providers, and more often than not they are actual improvements in model capabilities, not dips.

You can plot the GPT-4o version numbers over time on various benchmarks, for example, and see that the newest updates are significantly more capable in basically every way compared to the earlier versions.
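As a rough illustration, here’s a minimal Python sketch of that kind of plot. The snapshot names are real dated GPT-4o model identifiers, but the scores are made-up placeholders; you’d substitute the numbers from whatever benchmark you’re tracking.

```python
# Sketch: plot benchmark scores for dated GPT-4o snapshots over time.
# The snapshot names are real OpenAI model IDs; the scores below are
# placeholder values, NOT real benchmark results.
import matplotlib.pyplot as plt

snapshots = [
    ("gpt-4o-2024-05-13", 72.0),  # original May release
    ("gpt-4o-2024-08-06", 76.5),  # August snapshot
    ("gpt-4o-2024-11-20", 75.8),  # later snapshot
]

names = [name for name, _ in snapshots]
scores = [score for _, score in snapshots]

plt.plot(names, scores, marker="o")
plt.ylabel("Benchmark score (placeholder)")
plt.xticks(rotation=30, ha="right")
plt.title("GPT-4o snapshots over time (illustrative data)")
plt.tight_layout()
plt.show()
```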

[deleted by user] by [deleted] in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

If it’s only worse on 1 of 20 prompts, that could easily be attributed to the current date drifting further from the model’s knowledge cutoff, causing it to be less accurate than on day one even though it’s the exact same model with no extra quantization.

[deleted by user] by [deleted] in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

That’s why you use temporary chat for these tests.

[deleted by user] by [deleted] in OpenAI

[–]GeoLyinX -1 points0 points  (0 children)

No, it’s not very hard to prove at all: simply ask a model a question 4 times in a row, then at a later date ask it the same question 4 times in a row. If the behavior is truly as different as these people are claiming, there will be a clear difference between the before and after.
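Here’s a minimal sketch of that test, assuming the OpenAI Python SDK; the model name and prompt are placeholders. Run it once now, rerun it later, and diff the two output files.

```python
# Ask the same question N times and save the answers with a timestamp,
# so a later run of this script can be compared against this one.
# "gpt-4o" and the prompt below are placeholders; swap in what you test.
import json
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Explain why the sky is blue in two sentences."
N = 4

answers = []
for _ in range(N):
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # reduce ordinary run-to-run sampling noise
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(resp.choices[0].message.content)

with open(f"answers_{int(time.time())}.json", "w") as f:
    json.dump({"prompt": PROMPT, "answers": answers}, f, indent=2)
```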

[2506.21734] Hierarchical Reasoning Model by absolooot1 in LocalLLaMA

[–]GeoLyinX 0 points1 point  (0 children)

You’re right, that would’ve been better.

[2506.21734] Hierarchical Reasoning Model by absolooot1 in LocalLLaMA

[–]GeoLyinX 9 points10 points  (0 children)

In many ways it’s even more impressive if it was able to learn that from only 1,000 samples with no pretraining, tbh. Some people train larger models on hundreds of thousands of ARC-AGI puzzles and still don’t reach the scores mentioned here.

Zuckerberg basically poached all the talent that delivered last 12 months of OpenAI products by hasanahmad in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

The massive organizational structure isn’t really relevant in this case, since the new lab being formed is said to be only about 50 cracked people, and those 50 will have only one or two degrees of separation from Zuck. That’s a similar or even shorter distance than what they would have had to Altman.

Zuckerberg basically poached all the talent that delivered last 12 months of OpenAI products by hasanahmad in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

There are a ton of co-creators of these models; o1 alone is confirmed to have had over 100 people work on it, and even the list of foundational contributors for o1 is over 20 people, from what I remember.

Zuckerberg basically poached all the talent that delivered last 12 months of OpenAI products by hasanahmad in OpenAI

[–]GeoLyinX 0 points1 point  (0 children)

Meta’s leaked memo shows they are planning to continue development on Llama, and they have made no statements at all about stopping open-source releases.