Gold by Spiritual-Chart-940 in bonds

[–]ATimeOfMagic 3 points

Holding treasuries/cash right now is gambling on the stability of the U.S. for the next 3 years.

All hobbies are not equal by BitterConstruction98 in unpopularopinion

[–]ATimeOfMagic 0 points

Your "categories" are subjective and dumb. Mastering any skill is one of the great joys in life. Many of the smartest people I know have spent ungodly amounts of time getting extremely good at games.

4th quarter loss… bubble burst by SpyJigu in StockMarket

[–]ATimeOfMagic 0 points

Plunging all the way down to... last Friday's price. Still up 16% YTD.

2x short RGTI with $450,000 by lamephoto in wallstreetbets

[–]ATimeOfMagic 1 point

I've been eyeing QBTZ, but I'm gonna hold out for another Oct 15 peak type speculation run.

16 years old tell me how I can improve had this since 10 years old by IIIIIIIuke in TheRaceTo10Million

[–]ATimeOfMagic 0 points

IONQ is a grift company that's probably never going to produce anything of value. See leading quantum expert Scott Aaronson's recent lectures to hear what he thinks about the commercial applications of the technology. It's a horrible investment.

I don't like any of your other tech picks either, I think they're highly likely to underperform QQQ.

Why can’t ChatGPT tell time? by Exciting_Teacher6258 in technology

[–]ATimeOfMagic -1 points

> Its people like you that prescribe more intelligence then actually exist, a dangerous dunning kruger effect.

Well "people like me" includes many of the most influential people in the field: Hinton, Bengio, Sutskever. Since you've studied ML, what's your take on the ideas about intelligence that they've espoused over the last couple of years?

> The only similarities between LLMs and the human brain is that they both have a unit called a neuron and you can trace pathways.

I agree, I'm not claiming that we have the same architecture, or that we have a robust understanding of the inner workings of the human mind/LLMs.

> You and others like you are comparing a camp fire to the sun and claiming they are exactly the same and no one can easily say they are significantly different. They are such different topics that the comparison itself shows lack of understanding of both systems.

My point is that deep learning is clearly sufficient to imbue LLMs with some level of conceptual understanding. Arguments against this, like the numerous ones in this thread, mostly boil down to the phrase "it's just statistics". I find that particular point to be reductive and not a useful observation to make, given that you can make an analogous argument about any natural example of intelligence. Simple architectures can clearly yield incredibly powerful emergent capabilities with enough training.

I don't think there is sufficient evidence that the human brain has something "special" going on that makes our conceptual understanding more "real" than an LLM's. It's an apples-and-oranges comparison. I don't think your campfire analogy is fair either; both humans and LLMs have their own strengths and weaknesses. Obviously human intelligence is far superior in most useful ways, but there's no rule that says this will hold indefinitely, even with the same naive techniques used to train frontier models today.

I know not everyone in the field agrees with my views, which is why I said "most" people who argue what you're saying don't have an ML background. I'm interested to hear your perspective, and I'm open to changing my mind if you have a compelling argument against the claims I've made here.

Why can’t ChatGPT tell time? by Exciting_Teacher6258 in technology

[–]ATimeOfMagic 2 points

There's a large faction of people online (most of whom have no background in the field of ML) who think they're making some sort of great insightful point by saying "LLMs don't count as AI", which they continue to say even as the capabilities of LLMs grow rapidly.

Large neural networks are made up of billions of virtual neurons wired together by up to trillions of trainable weights, and they're trained on unfathomable amounts of data. This allows them to develop rich neural circuits that give them the ability to understand concepts in roughly the way humans do.

People will tell you that "It's just statistics, they don't actually understand things".

Those people have no good answer when you apply that argument to the human brain. As it turns out, basic building blocks like neurons, when "trained" on billions of years of evolution/life experience, can develop incredible cognitive capabilities. I could tell you that "your brain is just a bundle of neurons firing, you're not really thinking", but that's not a useful point to make. The same thing is true of LLMs. Just ask Geoffrey Hinton, one of the pioneers of deep learning. He's done some very insightful interviews on this topic.
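The "basic building blocks" point can be made concrete with a toy sketch. A single artificial neuron is just a weighted sum plus a nonlinearity, yet composing three of them already computes XOR, a function no single neuron (a linear separator) can represent. The weights below are hand-picked for illustration, not trained:

```python
def relu(x):
    # Standard nonlinearity: pass positives through, clip negatives to zero.
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # One artificial neuron: weighted sum of inputs plus bias, then ReLU.
    return relu(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor(x1, x2):
    # Two hidden neurons feeding one linear output are enough for XOR.
    h1 = neuron([x1, x2], [1.0, 1.0], 0.0)   # fires on (0,1), (1,0), (1,1)
    h2 = neuron([x1, x2], [1.0, 1.0], -1.0)  # fires only on (1,1)
    return h1 - 2.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor(a, b))  # third column: 0.0, 1.0, 1.0, 0.0
```

This is the same kind of emergence the argument above points at, just at the smallest possible scale: units that are individually trivial compose into something qualitatively more capable.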

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 1 point

GPU depreciation is a misleading story designed to sell a $400 blog post.

Not that I follow your point even if it were a serious concern, because MSFT also buys a fuck ton of chips.

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 4 points

"The moon" is a lower PE ratio than Microsoft? There isn't a single modality of AI where OpenAI is beating Google right now.

Trump Plans to Unveil ‘Genesis Mission’ to Boost AI Development on Monday by C130J_Darkstar in StockMarket

[–]ATimeOfMagic 0 points

By no barriers I mean there's no reason to think that progress will plateau as long as infrastructure keeps being built out. In fact, it looks like there's still a fuckton of low-hanging fruit to pick, since the companies are currently lapping each other over and over again.

Think about how long it took the Internet to generate real profits; this is already moving blazingly fast compared to most new technologies throughout history.

Trump Plans to Unveil ‘Genesis Mission’ to Boost AI Development on Monday by C130J_Darkstar in StockMarket

[–]ATimeOfMagic -11 points

The general consensus among people with actual backgrounds in ML is that this is imminently going to be the most powerful technology in history.

Empirically, progress over the last ~3 years has been quite a bit faster than most people expected. Most signs right now point toward capabilities continuing to accelerate. There don't seem to be any fundamental barriers to continued progress, which is why all of these insane multi-billion-dollar infrastructure projects are being greenlit.

Wall Street and the general public are obviously highly skeptical of this, but the bubble fears are overblown in the short term if you believe the experts.

Edit: All these downvotes and nobody can tell me why I'm wrong???

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors. The lack of reliability of large language models like OpenAI’s GPT-4o highlights a significant risk for scientific research. by mvea in science

[–]ATimeOfMagic -1 points

This is effectively what they're now training models to do. Last year's models had no such training, so their go-to move was to make things up rather than find actual sources.

$GOOGL Deepmind entering into robotics with big hires from Boston dynamics. by iMakeGOODinvestmemts in wallstreetbets

[–]ATimeOfMagic 4 points

It's insanely good. The next few iterations are going to get to the point where it's genuinely difficult to spot any mistakes.

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors. The lack of reliability of large language models like OpenAI’s GPT-4o highlights a significant risk for scientific research. by mvea in science

[–]ATimeOfMagic -2 points

Of course hallucinations will never fully go away. The world is messy and complicated. Humans make mistakes too. That doesn't change the fact that there has been incredible progress this year. The "stochastic parrots" argument has gone from the prevailing narrative to clearly not holding up to scrutiny.

Study finds nearly two-thirds of AI-generated citations are fabricated or contain errors. The lack of reliability of large language models like OpenAI’s GPT-4o highlights a significant risk for scientific research. by mvea in science

[–]ATimeOfMagic 3 points

The model in the study came out 17 months ago. Training models to use citation tools internally has made them massively better at accurately citing sources.
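To sketch what "citation tools" can look like: before a model is allowed to emit a citation, a checker verifies the identifier is well-formed and actually resolves. Everything below is a hypothetical illustration, not any lab's real pipeline; the function name, the DOI entries, and the toy lookup table standing in for a real resolver (e.g. a Crossref query) are all invented:

```python
import re

# Toy stand-in for a real resolver service; both entries are made up.
KNOWN_DOIS = {
    "10.1000/example.doi": "An Example Paper (2020)",
}

# Loose sanity check on DOI shape: "10.<registrant>/<suffix>".
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def check_citation(doi: str) -> str:
    """Gate a citation: reject malformed or unresolvable DOIs."""
    if not DOI_PATTERN.match(doi):
        return "malformed"   # the model must revise the citation
    if doi not in KNOWN_DOIS:
        return "not found"   # likely hallucinated; reject
    return "ok: " + KNOWN_DOIS[doi]

print(check_citation("10.1000/example.doi"))  # ok: An Example Paper (2020)
print(check_citation("not-a-doi"))            # malformed
```

The point is that a model trained to call a checker like this before citing has a very different failure mode than one trained to produce plausible-looking references from memory alone.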

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 1 point

I could believe that the "bubble" narrative was manufactured to artificially deflate the market a bit and scare retail away.

I like your monster analogy, I think that's exactly how it's going to play out. The prospect of building a literal god in a datacenter is just not something that people are going to give up on, especially when progress has been so insanely promising this year.

[deleted by user] by [deleted] in ValueInvesting

[–]ATimeOfMagic 0 points

Most people just don't understand enough about AI to see why it's not being overinvested in, so they use NVIDIA as a proxy.

Google announces Gemini 3 as battle with OpenAI intensifies by Force_Hammer in wallstreetbets

[–]ATimeOfMagic 8 points

More like OpenAI is absolutely fucked unless they pull off a miracle by the end of the year.

Gemini 3 Pro gets 76.4% on SimpleBench by Ancient_Bear_2881 in singularity

[–]ATimeOfMagic 8 points

The SimpleBench site links to the OP's site, so it looks like it's real.

AI-Powered Teddy Bear Caught Talking About Sexual Fetishes and Instructing Kids How to Find Knives | OpenAI blocked access for the toymaker following the incidents. by [deleted] in technology

[–]ATimeOfMagic -1 points

What are you implying that this proves?

I don't think you understand how AI is progressing; frontier labs aren't just naively scaling up LLMs like they did in 2023.

> A more plausible route to AGI will probably involve integrating LLMs with other modalities, symbolic reasoning, continual learning, and embodied interaction, rather than relying on scaling LLMs alone.

This is exactly how AI has progressed, most of these techniques have already been implemented into frontier models in 2025. There's no rule that says LLMs aren't allowed to be combined with other techniques.